Modality work list - Which items are returned for C-FIND request of a sequence? - dicom

My question is really basic. Consider querying a modality worklist to get work items via a C-FIND query, using a sequence (SQ) as a Return Key attribute, for example (0040,0100) (Scheduled Procedure Step Sequence) with universal matching.
What should I expect in the SCP's C-FIND response? Or, better said, what should I expect to find with regard to the scheduled procedure step for a specific work item? All the mandatory items that the Modality Worklist Information Model declares as encapsulated in the sequence? Or should I instead explicitly issue a C-FIND request for those keys I want the SCP to return in the response?
For example: if I want the SCP to return the Scheduled Procedure Step Start Time and Scheduled Procedure Step Start Date, do I need to issue a C-FIND request naming those keys, or is querying for the Scheduled Procedure Step key enough to force the SCP to send all items related to the Scheduled Procedure Step itself?

Yes, you should include the Scheduled Procedure Step Start Time / Date tags inside the (0040,0100) sequence.
See also the Service Class Specifications (PS 3.4, Section K.6.1.2.2).
This does not guarantee that you will retrieve the information, because which attributes are returned depends on the Modality Worklist provider.
You could also request a DICOM Conformance Statement from the modality provider to learn which tags are supported for request/retrieve.

As for Table K.6-1, you can read it as stating only the SCP-side requirements: which attributes the SCP is required to support as matching keys (i.e., query filters) and which additional attribute values it is required to return (i.e., Return Keys) with a successful match. It is up to the SCP's implementation to support matching against a required key, but you can always expect the SCP to use the values in the matching keys as query filters.
Also note that the SCP is only required to return values for attributes that are present in the C-FIND request. One exception is sequence matching, where a universal-matching-like mechanism lets you pass a zero-length item to retrieve the entire sequence. As stated in PS 3.4, Section C.2.2.2.6, you can include the Scheduled Procedure Step Sequence (0040,0100) containing a single empty item (FFFE,E000) for universal matching.
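To make the two request shapes concrete, here is a minimal sketch in plain Java, with nested maps standing in for a real toolkit's dataset type (with a library such as dcm4che you would build the same structure through its sequence/item API; the tag numbers are the standard ones):

```java
import java.util.*;

// Plain Java maps standing in for a DICOM toolkit's dataset type.
public class WorklistQuery {
    // Universal matching: (0040,0100) containing one zero-length item asks
    // the SCP to return the entire Scheduled Procedure Step Sequence.
    static Map<String, Object> universalIdentifier() {
        Map<String, Object> identifier = new LinkedHashMap<>();
        identifier.put("(0040,0100) ScheduledProcedureStepSequence",
                List.of(Map.of())); // single empty item
        return identifier;
    }

    // Explicit return keys: one item listing the attributes you want back,
    // each with a zero-length (empty) value.
    static Map<String, Object> explicitIdentifier() {
        Map<String, Object> item = new LinkedHashMap<>();
        item.put("(0040,0002) ScheduledProcedureStepStartDate", "");
        item.put("(0040,0003) ScheduledProcedureStepStartTime", "");
        Map<String, Object> identifier = new LinkedHashMap<>();
        identifier.put("(0040,0100) ScheduledProcedureStepSequence", List.of(item));
        return identifier;
    }

    public static void main(String[] args) {
        System.out.println(universalIdentifier());
        System.out.println(explicitIdentifier());
    }
}
```

The first variant relies on the universal sequence matching described above; the second spells out exactly which return keys you expect, which tends to be the safer choice across SCP implementations.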


Can you save a result (Given) to a variable in a Gherkin feature file, and then compare the variable with another result (Then)? (Cucumber for Java)

I am new to Cucumber for Java and trying to automate testing of a SpringBoot server backed by a MS SQL Server.
I have an endpoint "Get All Employees".
Writing the traditional feature file, I would have to list all the Employees in the Then clause.
This is not possible with thousands of employees.
So I just want to get a row count of the Employee table in the database, and then compare with the number of objects returned from the "Get All Employees" endpoint.
Compare
SELECT count(*) from EMPLOYEE
with the size of the list returned from
List<Employee> getAllEmployees()
But how does one save the rowcount in a variable in the feature file and then pass it into the stepdefs Java method?
I have not found any way that Gherkin allows this.
After writing a few scenarios and feature files, I understood this about Cucumber and fixed the issue.
Gherkin/Cucumber is not a programming language; it is just a specification language. When keywords like Given or Then are reached by the interpreter, the matching methods in the Java code are called. So they are just triggers.
These methods are part of a Java glue class. Data is not passed out of the Java class and into the Gherkin feature file. The class is instantiated at the beginning of the scenario and retained until the end, so it can store state.
So, from my example in the question above, the response from the Spring endpoint call is stored in a member variable of the glue class. The next Then invocation, which verifies the result, calls the corresponding glue method, which accesses the data in that member variable to perform the comparison.
So Gherkin cannot do this, but Java, at a lower level in the glue class, can.
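A glue class along these lines illustrates the idea; the step annotations are shown as comments, and the database/HTTP calls are replaced by stand-ins, so this is a sketch of the state-sharing pattern rather than a full Cucumber setup:

```java
import java.util.List;

// Sketch of a Cucumber step-definition ("glue") class. Cucumber instantiates
// it once per scenario, so member variables carry state between steps.
public class EmployeeSteps {
    private long dbRowCount;          // set by the Given step
    private List<String> employees;   // set by the When step

    // @Given("the employee row count is known")
    public void readRowCount() {
        // real code: SELECT count(*) FROM EMPLOYEE via JDBC
        dbRowCount = fakeCountQuery();
    }

    // @When("I call the Get All Employees endpoint")
    public void callEndpoint() {
        // real code: GET the employee list via a REST client
        employees = fakeGetAllEmployees();
    }

    // @Then("the number of employees matches the database")
    public void verifyCounts() {
        if (employees.size() != dbRowCount) {
            throw new AssertionError(
                "expected " + dbRowCount + " but endpoint returned " + employees.size());
        }
    }

    // Stand-ins for the database and HTTP calls.
    private long fakeCountQuery() { return 3; }
    private List<String> fakeGetAllEmployees() { return List.of("Ann", "Bob", "Cay"); }

    public static void main(String[] args) {
        EmployeeSteps steps = new EmployeeSteps();
        steps.readRowCount();
        steps.callEndpoint();
        steps.verifyCounts();
        System.out.println("counts match");
    }
}
```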
You could create a package, for example named dataRun (with the corresponding classes in the package), and save the details there during the test via setters.
During the execution of the step "And I get the count of employees from the database" you set this count via the corresponding setter; during the step "And I get all employees" you set the number via the dedicated setter.
Then, during the step "And I verify the number of employees is the same as the one in the database", you get the two numbers via the getters and compare them.
By the way, it is possible to compare the names of the employees (not just the count) if you put them into lists and compare the lists.

Optimize ReplaceDocumentAsync with property check in CosmosDb

I am using DocumentDb and would like to replace a document only if the property of the document takes a certain value. Notice that all stored documents have this property (and the value can never be empty).
The only way I found is to do this in 3 steps:
1) Read the document with ReadDocumentAsync
2) Check if the resource response has the property value I expect
3) If step 2 returns true then do the replace with ReplaceDocumentAsync, otherwise do something else
I am concerned about the additional request charge and latency, as this means two requests to the database. Is this the only way with the current .NET SDK, or is there a cleverer way?
Thank you
You could optimize this by using a stored procedure that executes directly in the database. The order of operations would be the same, and you would include your document as part of the payload to the stored procedure, but there would be no extra round trips or latency.
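Cosmos DB stored procedures are written in JavaScript and run inside the database engine, so the read, the property check, and the replace all happen in one client round trip. A sketch (the status property name, the expected value, and the procedure name are illustrative, not from the question):

```javascript
// Hypothetical stored procedure: replace a document only when its
// "status" property has the expected value. Runs server-side.
function replaceIfStatus(docLink, newDoc, expectedStatus) {
  var collection = getContext().getCollection();
  var response = getContext().getResponse();

  var accepted = collection.readDocument(docLink, function (err, doc) {
    if (err) throw err;
    if (doc.status !== expectedStatus) {
      response.setBody(false);           // property check failed: do nothing
      return;
    }
    collection.replaceDocument(doc._self, newDoc, function (err2) {
      if (err2) throw err2;
      response.setBody(true);            // document replaced
    });
  });
  if (!accepted) throw new Error("readDocument was not accepted");
}
```

The client then makes a single ExecuteStoredProcedureAsync call and inspects the boolean body to know whether the replace happened.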

Marketo Leads - How to find the updated values of progressionStatus field

I need to get the Marketo Leads who have changes on "progressionStatus" field (inside membership) with the API.
I can get all the leads related to a Program (with Get Leads by ProgramID API) without issues, but my need is to get those Leads with changes on "progressionStatus" column.
I was thinking of using the CreatedAt / UpdatedAt fields of the Program and then getting all the leads related to those programs, but I didn't get the accurate results I wanted.
I also tried to use the Get Lead Changes API with the "fields" parameter set to "progressionstatus", but that field doesn't exist.
Is it possible to resolve this?
Thanks in advance.
You can get the list of Leads with progression status change by querying against the Get Lead Activities endpoint.
The Get Lead Changes endpoint might sound like a good candidate, but it only returns changes on the lead fields. Progression status change is not stored on the lead directly, so in the end that won't work. On the other hand, the Get Leads by ProgramId endpoint returns –amongst others– the actual value of progressionStatus (the program status of the lead in the parent program), but not the "change" itself, so you cannot process the result set based on that.
The good news is that the progression status change is an activity type, and luckily we have the above-mentioned Get Lead Activities endpoint (also referred to as the Query endpoint in the API docs) available to query just that. This endpoint also allows filtering by activityTypeIds to narrow down the result set to a single activity type.
Basically, you have to call the GET /rest/v1/activities.json endpoint and pass the values of activityTypeIds and nextPageToken as query parameters (next to the access token, obviously). First, you need to get the internal id of the activity type called "Change Status in Progression". You can do that by querying the GET /rest/v1/activities/types.json endpoint and looking for a record with that name. (I don't know if this id changes from instance to instance, but in ours it is #104.) Also, to obtain a nextPageToken you have to make a call to GET /rest/v1/activities/pagingtoken.json, where you specify the earliest datetime to retrieve activities from. See more about Paging Tokens.
Once you have all of these bits at hand, you can make your request like that:
GET https://<INSTANCE_ID>.mktorest.com/rest/v1/activities.json?activityTypeIds=<TYPE_ID>&nextPageToken=<NEXTPAGE_TOKEN>&access_token=<ACCESS_TOKEN>
The result is an array of items like the one below, which is easy to process further.
{
  "id": 712630,
  "marketoGUID": "712630",
  "leadId": 824864,
  "activityDate": "2017-12-01T08:51:13Z",
  "activityTypeId": 104,
  "primaryAttributeValueId": 1104,
  "primaryAttributeValue": "PROGRAM_NAME",
  "attributes": [
    { "name": "Acquired By", "value": true },
    { "name": "New Status ID", "value": 33 },
    { "name": "Old Status ID", "value": 32 },
    { "name": "Reason", "value": "Filled out form" },
    { "name": "Success", "value": false },
    { "name": "New Status", "value": "Filled-out Form" },
    { "name": "Old Status", "value": "Not in Program" }
  ]
}
Knowing the leadIds in question, you can make yet another request to fetch the actual lead records.
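Once a page of activities is deserialized, collecting the affected leads is a simple filter. A sketch in Java (the Activity model here is illustrative, and 104 is whatever id "Change Status in Progression" maps to in your instance):

```java
import java.util.*;
import java.util.stream.*;

public class ProgressionChanges {
    // Minimal model of one activity from the activities.json response.
    record Activity(long leadId, int activityTypeId, String activityDate) {}

    // Collect the distinct leadIds whose program status changed.
    static List<Long> leadsWithStatusChange(List<Activity> activities, int changeStatusTypeId) {
        return activities.stream()
                .filter(a -> a.activityTypeId() == changeStatusTypeId)
                .map(Activity::leadId)
                .distinct()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Activity> page = List.of(
                new Activity(824864, 104, "2017-12-01T08:51:13Z"),
                new Activity(824865, 1, "2017-12-01T09:00:00Z"));
        System.out.println(leadsWithStatusChange(page, 104)); // [824864]
    }
}
```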

Write conflict in Dynamo

Imagine that there are two clients, client1 and client2, both writing the same key. This key has three replicas named A, B, and C. A first receives client1's request and then client2's, while B receives client2's request and then client1's. Now A and B must be inconsistent with each other, and they cannot resolve the conflict even using vector clocks. Am I right?
If so, it seems that write conflicts occur easily in Dynamo. Why are so many open-source projects based on Dynamo's design?
If you're using DynamoDB and are worried about race conditions (which you should be if you're using Lambda), you can attach conditions to putItem or updateItem; if the condition fails, the write is rejected.
For example: during getItem the timestamp was 12345, so you add a condition that the timestamp must equal 12345. If another process updates the item and changes the timestamp to 12346, your put/update fails. In Java, for example, you can catch ConditionalCheckFailedException, do another getItem, apply your changes on top, and resubmit the put/update.
To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.
For more information about PutItem, see Working with Items in the Amazon DynamoDB Developer Guide.
Parameters:
putItemRequest - Represents the input of a PutItem operation.
Returns:
Result of the PutItem operation returned by the service.
Throws:
ConditionalCheckFailedException - A condition specified in the operation could not be evaluated.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/AmazonDynamoDB.html#putItem-com.amazonaws.services.dynamodbv2.model.PutItemRequest-
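The read / conditional-write / retry loop can be sketched against an in-memory stand-in for the table; the exception name mirrors the SDK's, but this is not AWS SDK code:

```java
import java.util.*;
import java.util.concurrent.*;

public class ConditionalPutSketch {
    static class ConditionalCheckFailedException extends RuntimeException {}

    // In-memory stand-in for a DynamoDB table keyed by id, storing a
    // {version, value} pair. The put succeeds only when the stored version
    // still equals the version the caller read (a condition expression).
    static final Map<String, long[]> TABLE = new ConcurrentHashMap<>();

    static void conditionalPut(String id, long expectedVersion, long newValue) {
        TABLE.compute(id, (k, cur) -> {
            long currentVersion = (cur == null) ? 0 : cur[0];
            if (currentVersion != expectedVersion) {
                throw new ConditionalCheckFailedException();
            }
            return new long[] { currentVersion + 1, newValue };
        });
    }

    public static void main(String[] args) {
        TABLE.put("item", new long[] { 12345, 1 });
        conditionalPut("item", 12345, 2);          // succeeds, version -> 12346
        try {
            conditionalPut("item", 12345, 3);      // stale version: rejected
        } catch (ConditionalCheckFailedException e) {
            // re-read, apply your changes on top, retry with the fresh version
            long fresh = TABLE.get("item")[0];
            conditionalPut("item", fresh, 3);
        }
        System.out.println(TABLE.get("item")[1]);  // 3
    }
}
```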
I can't talk about HBase, but I can about Cassandra, which is inspired by Dynamo.
If that happens in Cassandra, the most recent key wins.
Cassandra uses coordinator nodes (which can be any node) that receive the client requests and resend them to all replica nodes, meaning each request has its own timestamp.
Imagine that Client2 has the most recent request, milliseconds after Client1.
Replica A receives Client1, which is saved, and then Client2, which is saved over Client1 since Client2 is the most recent information for that key.
Replica B receives Client2, which is saved, and then Client1, which is rejected since it has an older timestamp.
Both replicas A and B end up with Client2, the most recent information, and are therefore consistent.
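The replica-side last-write-wins rule can be sketched as follows (names are illustrative; a real replica also persists tombstones and breaks timestamp ties deterministically, which this ignores):

```java
import java.util.*;

public class LastWriteWins {
    // Value plus the coordinator-assigned timestamp for one key.
    record Versioned(String value, long timestamp) {}

    static final Map<String, Versioned> REPLICA = new HashMap<>();

    // Apply a write only if it is newer than what the replica already has.
    static void write(String key, String value, long timestamp) {
        Versioned cur = REPLICA.get(key);
        if (cur == null || timestamp > cur.timestamp()) {
            REPLICA.put(key, new Versioned(value, timestamp));
        } // otherwise the write is rejected as stale
    }

    public static void main(String[] args) {
        // Order seen by replica A: client1 then client2 -- client2 overwrites.
        write("k", "client1", 100);
        write("k", "client2", 101);
        // Order seen by replica B: client2 arrived first, then client1,
        // which is rejected as stale -- same final state either way.
        write("k", "client1", 100);
        System.out.println(REPLICA.get("k").value()); // client2
    }
}
```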

Firebase better way of getting total number of records

From the Transactions doc, second paragraph:
The intention here is for the client to increment the total number of chat messages sent (ignore for a moment that there are better ways of implementing this).
What are some standard "better ways" of implementing this?
Specifically, I'm looking at trying to do things like retrieve the most recent 50 records. This requires that I start from the end of the list, so I need a way to determine what the last record is.
The options as I see them:
1) use a transaction to update a counter each time a record is added, then use the counter value with setPriority() for ordering
2) forEach() the parent and read all records, doing my own sorting/filtering at the client
3) write server code to analyze Firebase tables and create indexed lists like "mostRecentMessages" and "totalNumberOfMessages"
Am I missing obvious choices?
To view the last 50 records in a list, simply call "limit()" as shown:
var data = new Firebase(...);
data.limit(50).on(...);
Firebase elements are ordered first by priority, and if priorities match (or none is set), lexicographically by name. The push() command automatically creates elements that are ordered chronologically, so if you're using push(), no additional work is needed to use limit().
To count the elements in a list, I would suggest adding a "value" callback and then iterating through the snapshot (or doing the transaction approach we mention). The note in the documentation actually refers to some upcoming features we haven't released yet which will allow you to count elements without loading them first.
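The transaction approach from the option list can be sketched as a local simulation in plain JavaScript (no Firebase client involved; the real transaction() re-runs your update function whenever the server value changed underneath you):

```javascript
// Local sketch of the transaction retry loop used for a message counter.
// A real Firebase transaction(updateFn) compares against the server value
// and re-runs updateFn when another client changed it concurrently; in this
// single-threaded sketch the check trivially passes on the first attempt.
function runTransaction(store, key, updateFn) {
  while (true) {
    var current = store[key];            // read the current value
    var updated = updateFn(current);     // compute the new value
    if (store[key] === current) {        // no concurrent change: commit
      store[key] = updated;
      return updated;
    }                                    // otherwise retry with the fresh value
  }
}

var db = { messageCount: 0 };
runTransaction(db, 'messageCount', function (c) { return (c || 0) + 1; });
runTransaction(db, 'messageCount', function (c) { return (c || 0) + 1; });
console.log(db.messageCount); // 2
```

The counter written this way can then be stored with setPriority() to keep the list ordered, as in option 1 above.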
