FHIR Search Slots request - dstu2-fhir

I am implementing a FHIR server, and for unavoidable reasons I do not have access to the doctor's schedule; however, I do have access to the slots available for appointment booking.
I can retrieve slots using four parameters:
doctor id
organization id
location id
date of slot
Would the query below be considered a valid Slot query in FHIR?
http://localhost:8080/context/fhir/Slot?practitioner=Practitioner/123456789&organization=Organization/1234&location=Location/2&start=2016-07-25
Also, in the response to the above query: since a reference to a Schedule is mandatory (Slot has cardinality 1..1 for the Schedule reference), can I return a reference value such as:
"schedule": {
"reference": "Schedule/notrequired"
}
in Slot response ?

Unfortunately, right now, you do have to expose a Schedule, but there isn't any reason it has to be "real". The way we have currently implemented Slot searching is by exposing a dummy Schedule with the only data element being the link to the actor. For example:
<Schedule xmlns="http://hl7.org/fhir">
  <id value="1234" />
  <actor>
    <reference value="http://host/api/FHIR/DSTU2/Practitioner/1234" />
    <display value="Cooper Thompson, MD" />
  </actor>
</Schedule>
Our Slot search ends up looking like this (with some edits for brevity and clarity, specifically around the slottype):
http://host/api/FHIR/DSTU2/Slot?Schedule.actor:Practitioner=1234&Schedule.actor:Patient=5678&slottype=urn:oid:1.2.3|Cardiology&start=2016-07-21
Note that this is technically invalid, as a Slot can only have one Schedule, and we are including multiple chained search parameters for Schedule. We also make use of extensions to send back the patient, practitioner, and location associated with the slot, since Slot.schedule is 1:1. However, this "intentional misuse" is the best option I've found short of forcing the client to become the scheduling system and line up slots for each resource itself.
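For illustration, a Slot returned against such a dummy Schedule might look something like the sketch below (the extension URL is invented for this example, not a standard or vendor-defined one):
<Slot xmlns="http://hl7.org/fhir">
  <id value="5678" />
  <!-- hypothetical extension carrying the location, since Slot.schedule is 1:1 -->
  <extension url="http://host/api/FHIR/extensions/slot-location">
    <valueReference>
      <reference value="Location/2" />
    </valueReference>
  </extension>
  <schedule>
    <reference value="Schedule/1234" />
  </schedule>
  <freeBusyType value="free" />
  <start value="2016-07-25T09:00:00Z" />
  <end value="2016-07-25T09:30:00Z" />
</Slot>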
There are some tracker items (9989, 9208) in the FHIR gforge about making updates to Slot to be more friendly to "simple clients". We'd appreciate your input :).

I might be missing something here, but I'm not sure how you are defining the difference between the Slot and the Schedule?
The Schedule resource simply defines a period of time within which slots may exist, and the actor (resource) it applies to. It does not define or expose the appointments that may exist during this time.
The Slot resource does not define the search parameters you've implied; those are all defined on the Schedule resource that the Slot links to.
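For example (a sketch reusing the ids from the question), a chained search that walks from Slot through its schedule to the actor might look like:
http://localhost:8080/context/fhir/Slot?schedule.actor=Practitioner/123456789&start=2016-07-25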
A Practitioner, Location and Patient can each have their own schedule/slots and thus it depends on the system where the complexity is defined.
Some systems decide that they are only going to worry about practitioners (who have their own room), others only worry about the room and will allocate practitioners later.
From my understanding of what I think you're trying to do (creating a FHIR façade in front of a practice management system), I think you will need to expose the following resources:
Practitioner: To expose the details of the practitioner (I'd be interested to know whether your practitioners can work in multiple locations)
Schedule: To expose the date range for which you are accepting appointments (and within which slot availability is defined). The practitioner is linked to this resource; if practitioners work at multiple locations, you'll have one Schedule per location each practitioner works at. (If the Location resources have their own schedules, you'll need to consider that further, including where the negotiation for available slots is done.)
Slot: To define the available slots that appointments could be scheduled into. (note: these are not appointments)
Appointment: To receive the created appointments (Not sure how you're going to handle this if you don't have access to the schedule)
Patient: Assuming that you want to assign patients to the appointments ;)
If this all makes sense and you clarify your environment, I'll put in the likely queries that you'll need to handle.
This was a great question, and Patient Administration is planning to write some implementation guidance on implementing this functionality in the various environments (General Practice, inpatient, outpatient, community, lab, etc.)

Related

How can we access NetworkMapCache in the Contract-States library of a CorDapp

I am trying to implement a Validator class in the Contract-States library of a CorDapp. It has several validation methods that are inherited by model classes and invoked in their init() functions, so that validation happens on the spot each time a model class is initialized.
Now I'm stuck at a point where I need to validate whether the incoming member name (passed in through a model class) matches the organisation name of the node, and I need to access the network map for that. How can I do that?
In the Work-Flow library, each flow extends the FlowLogic class, which gives access to the ServiceHub and, through it, the network map. But how can I do that in the Contract-States library?
P.S. - I'm trying to avoid any circular dependency (Contract-States lib should not depend on Work-Flow lib)
The short answer: you can't.
The long answer:
The difference between flow validation and contract validation is that the latter (contracts) must be deterministic, meaning that for the same input they must always give the same output, whether it's now or in 100 years, on the current node or on any other node.
The reason is that any time (even in the future) a node receives a transaction, it must validate that transaction; that includes validating its inputs, which in turn requires validating the transactions that created those inputs, and so on, until you get a fully validated graph of all the transactions that led to the current one.
That's why a contract must return the same result every time, why it must be deterministic, and why contracts (unlike flows) don't have access to external resources (such as HTTP calls, or even the node's database).
Imagine if a contract relied on the node's database for some validation rule. As you know, states are distributed on a need-to-know basis (i.e. only to the participants of the state), so one node might have the state that you're using as a validation source while another node won't; the contract's output (transaction valid/invalid) would then differ between nodes, which breaks determinism.
Contracts only have access to the transaction components: inputs, outputs, attachments, signatures, time-windows, reference states.
Good news: there are other ways to implement your requirement:
Using an attachment that has the list of nodes that are allowed to be part of the transaction, this method should be used if the blacklist is not updated frequently and you can see the example here.
Using reference states, where you can create a state that has the allowed parties and require the existence of that reference state in your transaction; this method should be used when the blacklist is more frequently updated. You can read about reference states here.
Using Oracles, this option is in case there is a world organization (or for instance Ministry of Trade of some country) that provides an Oracle which returns the list of blacklisted parties; and you use that Oracle in your transaction. You can read about Oracles here.
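As a rough illustration of the reference-state option above, here is a minimal sketch in Kotlin (the state and contract names are hypothetical, and it assumes Corda 4 reference states):

import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractState
import net.corda.core.contracts.requireThat
import net.corda.core.identity.AbstractParty
import net.corda.core.transactions.LedgerTransaction

// Hypothetical reference state carrying the allowed parties.
data class AllowedPartiesState(
    val allowed: List<AbstractParty>,
    override val participants: List<AbstractParty> = emptyList()
) : ContractState

class MembershipContract : Contract {
    override fun verify(tx: LedgerTransaction) {
        // The allowed-parties list travels with the transaction as a reference
        // state, so this check stays deterministic on every node that verifies it.
        val allowed = tx.referenceInputsOfType<AllowedPartiesState>().single().allowed
        val everyone = tx.outputStates.flatMap { it.participants }
        requireThat {
            "All participants must be on the allowed list." using (everyone.all { it in allowed })
        }
    }
}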

What's the best way to generate ledger change events that include the transaction command?

The goal is to generate events on every participating node when a state is changed, where each event includes the business action that caused the change. In our case, the business action maps to the transaction command and captures the business intent, i.e. what the user is doing in business terms. So in our case, where we are modelling the lifecycle of a loan, an action might be to "Close" the loan.
We model Event at a state level as follows: Each Event encapsulates a Transaction Command and is uniquely identified by a (TxnHash, OutputIndex) and a created/consumed status.
We would prefer a polling mechanism that generates events on demand, but an async approach that generates events on ledger changes would be acceptable. Either way, our challenge is getting the Command out of the transaction.
We considered querying the states using the Vault Query API vaultQueryBy() for the polling solution (or vaultTrackBy() for the async Observable-stream solution). We were able to create a flow that gets the transaction for a state. This had to be done in a flow, as Corda deprecated the function that would have allowed us to do this in our Spring Boot client. In the client we use vaultQueryBy() to get a list of states. Then we call a flow that iterates over the states, gets the txHash from each StateRef, and calls serviceHub.validatedTransactions.getTransaction(txHash) to get the SignedTransaction, from which we can ultimately retrieve the Command. Is this the best or recommended approach?
Alternatively, we have also thought of generating events from the transactions themselves, by querying for transactions and then building the event for each input and output state in the transaction. If we go this route, what's the best way to query transactions from the vault? Is there an Observable-stream-based option?
I assume this mapping of states to commands is a common requirement for observers of the ledger, because it is standard to drive contract logic off the transaction command and quite natural to have the command map to the user intent.
What is the best way to generate events that encapsulate the transaction command for each state created or consumed on the ledger?
If I understand correctly, you're attempting to get notified when certain types of ledger updates occur (open, approved, closed, etc.).
First: asynchronous notifications are best practice in Corda; polling should be avoided because of the load that constant querying puts on the node and the delays it introduces. Corda provides several mechanisms for Observables which you can use: https://docs.corda.net/api/kotlin/corda/net.corda.core.messaging/-corda-r-p-c-ops/vault-track-by.html
Second: avoid querying transactions from the database, as these are intended to be internal to the node. See this answer for background on why to avoid transaction querying. In general, only tables whose names begin with "VAULT_" are intended to be queried.
One way to solve your use case would be a "status" field that reflects the command used to produce the current state. For example, if a "Close" command was used to produce the state, its status field could be "closed". You could then use the vaultTrackBy mechanism above to watch each state's status field and infer the action that occurred.
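A minimal sketch of that approach over RPC (the state type and its status field are hypothetical):

import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty
import net.corda.core.messaging.CordaRPCOps
import net.corda.core.messaging.vaultTrackBy

// Hypothetical state: `status` mirrors the command that produced it
// (e.g. "open", "closed").
data class LoanState(
    val status: String,
    override val participants: List<AbstractParty> = emptyList()
) : ContractState

// Subscribe to vault updates over RPC and react to each newly produced state.
fun watchLoans(proxy: CordaRPCOps) {
    val feed = proxy.vaultTrackBy<LoanState>()
    feed.updates.subscribe { update ->
        update.produced.forEach { produced ->
            println("Loan changed; status is now: ${produced.state.data.status}")
        }
    }
}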
Just to finish up on my comment: while the approach met the requirements, the problem with this solution is that we have to add and maintain our own code across all relevant states to capture transaction-level information that the platform already tracks. I would think a better solution would be for the platform to give consumers access to transaction-level information (selectively, perhaps), just as it does for states. After all, the transaction is, in part, a business/functional construct that is meaningful at the client-application level. For example, if I am "transferring" a loan, that may be a complex business transaction involving many input and output states, and it may be an important construct/notion for the client application to manage.

How does Corda verification work for transactions involving multiple states?

I'm currently trying to make a CorDapp that will be used for DvP (delivery versus payment), but I'm having trouble understanding some key concepts. For instance, I understand that a contract applies to one type of state in particular. What I don't really get is whether the contract's validation logic should apply only to that state object, or to all the states in the given transaction.
The typical example would be the issuance of a sell order:
The input of the transaction is the state of the issuer's stock account, and the outputs are the sell order and the modified stock account.
Basically, my question is: where do I do checks like "I don't sell more than I own", or "the number of stocks in the sell order plus what is left in the account equals what was initially in the account"?
I have followed the Corda tutorials but I'm still not clear on that logic.
It boils down to the orchestration layer (flows or APIs: what the user intends to do) vs. the ledger layer (what the user can do, i.e. guaranteed shared logic).
Contract code absolutely must be adhered to, so in your case not being able to sell more than you own would be part of the explicit contract.
The CMN guide here helped me conceptualise.
Flows are better described as business logic, so anything can be achieved within a flow as long as it adheres to the contract.
A security consideration: anybody may create a flow, and they are equally able to use any asset (and thus its state) within their third-party flow.
It is the relevant contract that ensures that your asset is used for the purpose you imagine, and that it isn't used malevolently.
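To make that concrete, here is a hedged sketch of a contract that validates across all the states in the transaction (the state types and field names are hypothetical):

import net.corda.core.contracts.Contract
import net.corda.core.contracts.ContractState
import net.corda.core.contracts.requireThat
import net.corda.core.identity.AbstractParty
import net.corda.core.transactions.LedgerTransaction

// Hypothetical states for the sell-order example.
data class StockAccountState(
    val quantity: Long,
    override val participants: List<AbstractParty> = emptyList()
) : ContractState

data class SellOrderState(
    val quantity: Long,
    override val participants: List<AbstractParty> = emptyList()
) : ContractState

class SellOrderContract : Contract {
    override fun verify(tx: LedgerTransaction) {
        // The contract sees the whole transaction, so it can relate the
        // sell order to both versions of the stock account.
        val accountIn = tx.inputsOfType<StockAccountState>().single()
        val accountOut = tx.outputsOfType<StockAccountState>().single()
        val order = tx.outputsOfType<SellOrderState>().single()
        requireThat {
            "Cannot sell more stock than is owned." using (order.quantity <= accountIn.quantity)
            "Stock must be conserved." using (order.quantity + accountOut.quantity == accountIn.quantity)
        }
    }
}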

Should I use DTOs or domain objects in Flex when connecting to BlazeDS/Java/JPA using remote services?

I am working on an Adobe Flex front-end with a Java back-end using JPA for persistence. The protocol I am using is remote objects (AMF) implemented with BlazeDS.
I started out with a service facade and entity DAOs, but without any specific DTOs. The same POJOs, the domain objects, were passed through the service facade and were also the objects passed to the Hibernate DAOs.
However, the latest few days I have been thinking whether this is a good approach or not. The latest article on the subject I read was this one:
Interesting JPA Pattern blog
The situation:
Say I have a POJO Book with a unidirectional ManyToOne relation to the POJO Category (i.e. each book may be associated with only one category, but the same category may be associated with many books). I see some alternatives:
Alternative 1:
I expose a method/operation addUpdateBook(Book book). In the implementation of this operation I add/update both the book and the referenced category. That means that if the client submits a book with a category that didn't exist before, the client can implicitly edit categories through the addUpdateBook operation.
pro: the client is working directly with the domain model!
con: the entire category information will be sent when a new book is added, even though a reference to the category would be sufficient
Alternative 2:
I expose a method/operation addUpdateBook(Book book, Long categoryId). In the implementation I retrieve the category for the given categoryId, replace the category given in the book POJO, and then persist the book. In other words, I ignore any category in the book object and just look at the categoryId. This means the client needs to use another operation in order to modify the category.
pro: the client can still work more or less on the domain model, but ...
con: ... it is confusing for the client that the category of the book object will be ignored
con: the entire category information of the book will be sent, even if the server will never read it
pro: it may be clearer when a separate operation should be used for category modifications
con: I need to retrieve the category before persisting the book. I guess this means some overhead.
Alternative 3:
I expose a method/operation addUpdateBook(BookDTO bookDto). The POJO BookDTO looks like the POJO Book, but instead of a field Category category it has a field Long categoryId. In the implementation I retrieve the Category for the given categoryId before I persist the Book (see the sketch after the pros and cons below).
pro: not confusing for the client
con(?): what should the method getBook(Long bookId) return? Should it return only the BookDTO? Then the client would also have to invoke getCategory(Long categoryId) in order to have "the entire book information", and would need to piece the parts together into a local domain representation of the book. Compared to alternative 1, this would be more complex on the client side?
con: I need to retrieve the category before persisting the book. I guess this means some overhead.
con: being forced to use DTOs makes the client deal with transport-level details and distances it from the actual domain model. It feels like I am missing the point of having a domain model and using JPA in the business layer.
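For what it's worth, a minimal sketch of alternative 3 (all names hypothetical; written in Kotlin for brevity, though any JVM language works the same way). Note that JPA's getReference can hand back a lazy proxy without an extra SELECT, which softens the last "con":

import javax.persistence.*

@Entity class Category { @Id var id: Long = 0; var name: String = "" }

@Entity class Book {
    @Id @GeneratedValue var id: Long = 0
    var title: String = ""
    @ManyToOne var category: Category? = null
}

// DTO exposed to the Flex client: it carries a categoryId, not the Category.
data class BookDTO(val id: Long?, val title: String, val categoryId: Long)

class BookService(private val em: EntityManager) {
    // Assumes it runs inside a container-managed transaction.
    fun addUpdateBook(dto: BookDTO): Long {
        // getReference returns a lazy proxy, so wiring up the association
        // does not require loading the Category from the database.
        val category = em.getReference(Category::class.java, dto.categoryId)
        val book = dto.id?.let { em.find(Book::class.java, it) } ?: Book()
        book.title = dto.title
        book.category = category
        em.persist(book)   // no-op for an already-managed instance
        return book.id     // generated ids are assigned at flush/commit
    }
}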
I guess (!) alternative 3 is how you would design the operations in a SOA context. However, for me it is not that important to be loosely coupled between client and server. My focus is not to provide multiple-client-platform support.
Which alternative would you propose? Are there other better alternatives? Do you know any nice resources, such as code examples, which could help me?
I'm using something close to "Alternative 3". In the beginning I started out with domain objects (probably also because of my experience with data services); after a while I ran into too many problems and switched to DTOs. All the public services expose only DTOs (for both input and output parameters).
Some of the problems I've run into when working directly with the domain objects and BlazeDS:
a) you need to break the domain objects' encapsulation (e.g. exposing properties or private constructors) in order to use them for data transfer; otherwise you will have to write your own serialization/deserialization.
b) you need to use tricks to make data conversion between client and server work. For example, using strings instead of dates to prevent timezone differences, or strings instead of int/double. You can solve some of these issues by writing custom proxies, but I still think it's easier to use strings instead of other data types.
c) most of the time you don't need all the data from the domain objects, and to deal with that you need various frameworks that support data pagination/lazy instantiation on the client. These frameworks introduce complexity, and I try to stay away from that.
The main disadvantage of using DTOs is the amount of boilerplate code needed to convert between domain objects and DTOs... but I still prefer using them.

DDD along with collections

Suppose I have domain classes:
public class Country
{
string name;
IList<Region> regions;
}
public class Region
{
string name;
IList<City> cities;
}
etc.
And I want to model this in a GUI in form of a tree.
public class Node<T>
{
T domainObject;
ObservableCollection<Node<T>> childNodes;
}
public class CountryNode : Node<Country>
{}
etc.
How can I automatically retrieve Region list changes for Country, City list changes for Region etc.?
One solution is to implement INotifyPropertyChanged on the domain classes and change IList<> to ObservableCollection<>, but that feels kind of wrong: why should my domain model have the responsibility of notifying about changes?
Another solution is to put that responsibility on the GUI/presentation layer: if some action led to adding a Region to a Country, the presentation layer should add the new region both to CountryNode.ChildNodes and to the domain Country.Regions.
Any thoughts about this?
Rolling INotifyPropertyChanged into a solution is part of implementing eventing in your model. By its nature, eventing is not out of line with the DDD mantra. In fact, it is one of the things Evans has implied was missing from his earlier material. I don't remember exactly where, but he mentions it in this video: What I've learned about DDD since the book.
In and of itself, implementing an eventing model is legitimate in the domain, because it introduces decoupling between the domain and the other code in your system. All you are saying is that your domain is capable of notifying interested parties that something has changed. It is then the subscriber's responsibility to act upon it. I think the confusion lies in the fact that you would only be using an INotifyPropertyChanged implementation to relay back to an event sink; that event sink would then notify any subscribers, via registered callbacks, that "something" has happened.
However, with that being said, your situation is one of the "fringe" scenarios that eventing does not apply all that well to. You want to know when to repopulate a UI because the list itself has changed. At work, our current solution uses an ObservableCollection; while it does work, I am not a fan of it. On the flip side, publishing the fact that one or more items in the list have changed is also problematic. For example, how would you determine the entropy of the list to best establish that it has changed?
Because of this, I would actually consider this not to be a concern of the domain. I do not think that what you are describing is a requirement of the domain owner(s), but rather an artifact of the application architecture. If you were to query the services in the domain after the change is made, they would return correct results; it is the application that is out of step. Nothing is actually wrong inside the world of the domain at this moment in time.
So, there are a few ways I could see this being done. The most inefficient would be to continually poll for changes directly against the model. You could also consider some sort of marker that indicates that the list is dirty, although without using the domain model to do so; once again, not that clean a solution. You can, however, apply those principles outside of the domain to come up with a working solution.
If you have some sort of shared caching mechanism, e.g. a distributed cache, you could implement a JIT-caching/eviction approach in which inserts and updates invalidate the cache (i.e. evict the cached items) and a subsequent request loads them back in. You could then put a marker into the cache itself that identifies when the item(s) were rebuilt. For example, if a cache item contains a list of region IDs, you could store alongside it the DateTime at which it was JIT-loaded. The application keeps track of which JIT-ed version it has, and reloads only when it sees that the version has changed. You still have to poll, but it keeps that responsibility out of the domain itself, and polling for a small marker is better than rebuilding the entire thing every time. A sketch of this idea follows.
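A rough sketch of that version-marker idea (hypothetical names; the idea is language-agnostic, sketched here in Kotlin):

import java.time.Instant

// Cache entry: the region ids plus the time the entry was rebuilt (the "version").
data class CachedRegionList(val regionIds: List<Long>, val builtAt: Instant)

class RegionCache(private val loader: () -> List<Long>) {
    @Volatile private var entry: CachedRegionList? = null

    // JIT load: rebuild the entry on first access after an eviction.
    fun get(): CachedRegionList =
        entry ?: CachedRegionList(loader(), Instant.now()).also { entry = it }

    // Called by inserts/updates to invalidate the cached list.
    fun evict() { entry = null }
}

// The application polls only the cheap version marker, not the whole model:
// it reloads the list only when the marker differs from the one it last saw.
fun needsReload(lastSeen: Instant?, cache: RegionCache): Boolean =
    lastSeen == null || lastSeen != cache.get().builtAt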
Also, before over-architecting a full-blown solution, raise the concern with the domain owner. It may be perfectly acceptable to them that you simply have a "Refresh" button or menu item somewhere. It is about compromise as well, and I am fairly sure that most domain owners would prefer core functionality over fixing certain types of issues.
About eventing: the most valuable material I've seen comes from Udi Dahan and Greg Young.
One nice implementation of eventing which I'm eager to try out can be found here.
But start by understanding whether it is really necessary, as Joseph suggests.
