How to use DCMTK binaries to send Modality Worklist to modalities without receiving Query from them?

I am using DCMTK storescp.exe to receive images from a CR modality and then process/save them in my DB.
Is it possible to use another DCMTK binary to manually send the PatientName and PatientID to the CR modality before the patient goes there?
I have read somewhere that the modality makes a query to get the Modality Worklist. I would like to reverse that flow: I want to directly send the Modality Worklist to the modality, whenever I like, without receiving a query from the modality.
Is that possible? If yes; how can I do that with DCMTK?
Please note that this is not an off-site tool request. I just want to know which DCMTK binary implements the required DICOM service/command.

You are looking for the Modality Worklist (MWL) service, which uses the C-FIND command.
SOP Class: 1.2.840.10008.5.1.4.31 [Modality Worklist Information Model – FIND].
But it does not work the way you are expecting; and it should not, for good reason.
The MWL SCU (in your case, the CR) initiates the query with whatever (optional) filters suit it. As usual, an association is established and the MWL SCP receives the MWL request. The SCP then fetches the data from its database matching the filters, if any, and sends one MWL response for each row fetched; the status of each such response is PENDING. When all the rows have been transferred, a final SUCCESS response is sent. If no rows match the filter, only the final response is sent. If something goes wrong, an appropriate failure response is sent. The SCU then sends a release request and, on receiving the release response, closes the association.
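For reference, the normal direction of that flow can be exercised with DCMTK's findscu acting as the MWL SCU (the role the CR plays). A minimal sketch, where the AE titles, host, port, and query keys are placeholder assumptions for your network; the command is echoed as a dry run rather than executed:

```shell
# Sketch of the MWL C-FIND query a modality would issue.
# -W selects the Modality Worklist information model; -k adds query keys.
# CR_MODALITY, RIS_SCP, ris-host and port 105 are placeholders.
# Echoed as a dry run; remove the echo wrapper to run it against a real SCP.
cmd='findscu -W -aet CR_MODALITY -aec RIS_SCP
  -k PatientName -k PatientID
  -k ScheduledProcedureStepSequence
  ris-host 105'
echo "$cmd"
```

Real worklist queries usually also constrain the scheduled procedure step (for example by modality or date) with nested keys inside ScheduledProcedureStepSequence.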
Now, why is your expected workflow not possible?
Generally, the MWL SCP is implemented by RIS systems. These systems have tools/features to register patient demographic data while/before admitting the patient to the hospital. They also have features to schedule the orders to be executed by modalities. There might be multiple modalities in a given DICOM network (hospital). Although a RIS has a way to decide which order should go to which modality (based on the AE Title, if configured and used properly), it cannot push the order, because it is acting as the SCP, i.e. the server. Like any other server in any network protocol, it has to wait for a request from the client, i.e. the SCU.
Further, even though the SCP may know which order should be sent to which modality, the modality may not be expecting that order, for many reasons. So the general flow in MWL is the one I explained above. You cannot implement your reverse workflow with any DICOM service/command.
Just for the sake of clarity:
All this has nothing to do with the data you received and stored in your DB using storescp.exe. I mean, you do not generally send that data to the modality as a Modality Worklist.
MWL happens first. When the modality gets the MWL item, it conducts the study and acquires images using the demographic data received in that item. This way, errors and redundant input are avoided, and the flow is somewhat automated. When done, the modality pushes (C-STORE) the instances (CR images in your case) to a C-STORE SCP, which is storescp.exe in your case.
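The tail of that chain is the store SCP the asker already runs. For completeness, a sketch of a storescp invocation, with the AE title, output directory, and port as placeholder assumptions, echoed as a dry run:

```shell
# storescp listens on a port and writes received instances to a directory:
#   --aetitle sets our AE title, -od sets the output directory.
# STORE_SCP, /data/incoming and port 104 are placeholders; echoed, not run.
cmd='storescp --aetitle STORE_SCP -od /data/incoming 104'
echo "$cmd"
```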

Related

Replies are not always delivered to the desired reply listener container

We are applying the request/reply semantics using spring-kafka with the ReplyingKafkaTemplate.
However, we noticed that sometimes the reply does not end up where it should.
Here is a rough description of our setup:
Service A
2 instances
consuming messages from topic-a, which has 2 partitions (each instance gets 1 partition assigned).
Service A is the initiator.
Service B:
2 instances, consumes messages from topic-b, which also has 2 partitions.
Reacts to incoming messages from A and returns a reply message using the @SendTo annotation.
Observed behaviour:
When an instance of service A, e.g. A1, sends a message to service B, the send fails with a reply timeout. The request is consumed by B successfully and a reply is returned; however, it is consumed by the other instance, e.g. A2. From the logging I can see that A1 gets topic-a-0 assigned, whereas A2 gets topic-a-1 assigned.
Suggestions from the docs:
Our scenario is described in this section of the docs: https://docs.spring.io/spring-kafka/reference/html/#replying-template
It gives a couple of suggestions:
Give each instance a dedicated reply topic
Use reply partition header and use dedicated partitions for each instance
Our setup is based on a single topic for the whole service, so all incoming events and reply events are sent to, and consumed from, this one topic. So option #1 is not desirable in our situation.
The downside of option #2 is that you cannot use the group management feature, which is a pity, because our services run on Kubernetes and we'd like to use group management for maximum flexibility.
A third option?
So I was wondering if there was a third option:
Why not use group management and determine the assigned topic partitions of the reply container at runtime, on the fly, when sending a message, and set the reply partition header accordingly?
It looks like the ReplyingKafkaTemplate#getAssignedReplyTopicPartitions method provides exactly this information.
This way, the partitions are not fixed and we can still use the group management feature.
The only downside I can foresee is that when the partitions are rebalanced after the request was sent but before the reply was received, the request could fail.
I have already tested something to see if it works, and it looks like it does. The main reason for posting this question is to check whether my idea makes sense and whether there are any caveats to take into account. I'm also wondering why this is not supported by spring-kafka out of the box.
If my solution makes sense, I am willing to raise an enhancement issue and provide a PR on the spring-kafka project.
The issue, as you describe it, is that there is no guarantee we'll get the same partition(s) after a rebalance.
The "third option" is to use a different group.id for each instance and set sharedReplyTopic=true. In that case, all instances will get the reply, and it will be discarded by the instance(s) that did not send the request.
The best solution, however, is to use a unique reply topic for each instance.

DICOM: C-Move without C-Find (Query Retrieve SCU)

In DICOM, the following SOP Classes are defined for C-FIND and C-MOVE at the Study Root:
Study Root Query/Retrieve Information Model - FIND: 1.2.840.10008.5.1.4.1.2.2.1
Study Root Query/Retrieve Information Model - MOVE: 1.2.840.10008.5.1.4.1.2.2.2
I have implemented Query/Retrieve SCP and SCU in multiple applications. In all those cases, I always implemented both classes: I do a C-FIND first to get the list of matching data, and then, based on the results, I do (automatically or manually) a C-MOVE to get the instances. All those implementations are working fine.
Recently, I have been working on an application that combines DICOM with another, private protocol to fulfill some specific requirements. It struck me: is it possible to do a C-MOVE directly, without doing a C-FIND as the SCU?
I already know the identifier (StudyInstanceUID) to retrieve, and I also know that it is present on the SCP.
I looked into the specifications but could not find anything conclusive. I am aware that C-FIND and C-MOVE can be issued by the SCU to the SCP on different connections/associations. So at first glance, what I am thinking looks possible and legal.
I have worked with many third-party DICOM applications; none of them implements the SCU the way I am thinking. All the SCUs implement both C-FIND and C-MOVE.
Question:
Is it DICOM-legal and practical to implement a Query/Retrieve SCU C-MOVE command without a C-FIND command? Please point me to the reference in the specifications if possible.
Short answer: yes, this is perfectly legal per the DICOM specification.
Long answer: let's consider DCMTK, the reference DICOM Q/R implementation. It provides a set of basic SCU command-line tools, namely findscu and movescu. The idea is to pipe the output of findscu into movescu to construct a valid C-MOVE (SCU) request.
In your case, you are simply replacing the findscu step with a private implementation that obtains the identifier not via the publicly defined C-FIND (SCU) protocol but by another mechanism (an extension to DICOM).
So yes, your C-MOVE (SCU) implementation is perfectly valid, since there is no requirement to perform a C-FIND (SCU) before the retrieval.
I understand you are not trying to back up an entire database using C-MOVE (SCU); that was just a possible scenario in which someone would try to use C-MOVE (SCU) without first querying with a valid C-FIND (SCU) result.
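As a concrete illustration of skipping findscu, here is a sketch of a direct movescu call with an already-known StudyInstanceUID. The AE titles, UID, host, and port are all placeholder assumptions, and the command is echoed as a dry run:

```shell
# -S selects the Study Root model; -aem names the C-STORE destination AE
# that will receive the moved instances. The UID is a made-up placeholder.
# Echoed as a dry run; remove the echo wrapper to run it against a real SCP.
cmd='movescu -S -aet MY_SCU -aec PACS_SCP -aem STORE_SCP
  -k QueryRetrieveLevel=STUDY
  -k StudyInstanceUID=1.2.3.4.5.6.7.8.9
  pacs-host 104'
echo "$cmd"
```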

Storage Commitment Service (push model): how do I get the result back to my SCU?

I plan to implement the Storage Commitment service to verify that files previously sent to the storage were safely stored.
My architecture is very simple and straightforward: my SCU sends some Secondary Capture images to the storage, and I want to be sure they are safely stored before deleting them.
I am going to adopt the push model, and I wonder what steps/features I need to implement to accomplish this service.
What I understood is:
I need to issue an N-ACTION request with SOP Class UID 1.2.840.10008.1.20.1 and add to the request a transaction identifier together with a list of Referenced SOP Class UID / Referenced SOP Instance UID pairs, where the Referenced SOP Instance UIDs are the UIDs of the Secondary Capture images I previously sent to the storage, and the Referenced SOP Class UID in my case is the SOP Class identifier for Secondary Capture Image.
Wait for the N-ACTION response to see whether the N-ACTION request succeeded or not.
Get the response from the storage in the form of an N-EVENT-REPORT.
But when? How does the storage give me back the N-EVENT-REPORT along with the results? Does my SCU AE have to implement some SCP features? Or do I need to issue an N-EVENT request to get an N-EVENT-REPORT?
Have a look at the Storage Commitment sequence diagram in Roni's DICOM blog (the image originally shown here was copied from there).
Now, about your question: the following explanation assumes that the same association is used for the entire communication. For communication over multiple associations, refer to Roni's article mentioned above.
But when?
Immediately, on the same connection/association. On receiving the N-ACTION response, you should wait for the timeout configured in your application. Before the timeout expires, you should get the N-EVENT-REPORT.
How the storage give me back the N-EVENT-REPORT along with the results?
When you receive the N-ACTION response from the SCP, that means the SCP is saying, "OK, I understood what you want. Now wait while I fetch your data...". So you wait. When the SCP is ready with all the necessary data (the commitment result list), it sends it back on the same association in an N-EVENT-REPORT. You parse the report, do your stuff, and send a response to the SCP saying, "Fine, I am done with you.", then close the association.
Does my SCU AE have to implement some SCP features?
No (in most cases); you do not need to implement any SCP features in either case (single association or multiple associations). You should get the N-EVENT-REPORT on the same association, as mentioned above. DICOM works over TCP/IP, and the client/server concept in TCP is limited to who establishes the connection and who listens for connections. Once the connection is established, either side can read and write data on the socket.
In rare cases, the SCP sends the N-EVENT-REPORT by initiating a new association on its own. In that case, the SCU does need to implement SCP features. This model is not in use as far as I am aware: it is difficult to implement for both the SCP and the SCU, and it needs extra configuration, which everyone tends to avoid. So it can be neglected. I call it rare because I have never (at least so far) come across such an implementation. But yes, it is a valid case, for valid reasons.
Or do I need to issue an N-EVENT request to get an N-EVENT-REPORT?
No, as said above. Refer to this excerpt from the specification (PS3.4, Annex J):
J.3.3 Notifications
The DICOM AEs that claim conformance to this SOP Class as an SCP shall invoke the N-EVENT-REPORT request. The DICOM AEs that claim conformance to this SOP Class as an SCU shall be capable of receiving the N-EVENT-REPORT request.
That said, the SCU should be able to process the N-EVENT-REPORT; it will NOT issue it.
There are three different sequences of events possible. I could describe them here, but this article is really excellent: Roni's DICOM blog
I have nothing to add to what is written there.

What is the difference between c-find and c-get DIMSE?

Hi all. Forgive me, I'm just a newbie to DICOM. I was reading the DIMSE part of the DICOM standard and found that both C-FIND and C-GET provide query/retrieve functionality against a DICOM PACS server.
So I tried to summarize the differences between them:
C-GET will trigger one or more C-STORE operations between the SCU and SCP.
C-GET retrieves the images themselves, whereas C-FIND queries only the attributes, not the images.
C-FIND returns multiple response messages if multiple DICOM objects match the query criteria.
Please help review my understanding, and correct me if there is any error. Thanks.
You use the C-FIND command for querying and the C-GET command for retrieving DICOM storage instances (images, reports, etc.). C-GET is performed within a single association (connection), but it is not commonly used. Instead, C-MOVE is usually used to retrieve DICOM storage instances; it uses a different association (connection) and role reversal (the SCP acts as an SCU) to send the data to the destination (the caller or another SCP/server).
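The contrast can be sketched with DCMTK's findscu and getscu: the first returns matching attributes only, while the second pulls the instances themselves back over the same association. AE titles, UID, host, port, and the output directory are placeholder assumptions, and both commands are echoed as dry runs:

```shell
# C-FIND: one response per matching study, attributes only (no pixel data).
find_cmd='findscu -S -aet MY_SCU -aec PACS -k QueryRetrieveLevel=STUDY -k StudyInstanceUID -k PatientName pacs-host 104'
# C-GET: retrieves the matching instances via C-STORE sub-operations on the
# same association; -od names the directory for the received files.
get_cmd='getscu -S -aet MY_SCU -aec PACS -k QueryRetrieveLevel=STUDY -k StudyInstanceUID=1.2.3.4 -od ./received pacs-host 104'
echo "$find_cmd"
echo "$get_cmd"
```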

How to pre-process Riak data entry before sending it to a client

I have a Riak installation with many nodes. It stores entries with relatively big blobs on the value side. In some cases, clients will need only a subset of this data, so there is no need to transfer all of it over the network.
Is there any elegant solution for pre-processing this data on the server side before actually sending it to the client?
The only idea I have is to install a small "agent" on every node that interacts with the client using its own protocol and acts as a proxy, reducing the data based on the query.
But such a solution will only work if I can know (based on the key) on which node a particular entry is stored. Is there a way to do that?
You may be able to do that with MapReduce, if you specify a single bucket/key as input. The map function would run local to where the data is stored, and send its result to the reduce function on whichever node received the request from the client. Since you are providing a specific key, there shouldn't be any coverage folding which is what causes the heavy load the docs warn about.
This would mean that your requests would effectively be using r=1, so if there is ever an outage you could get some false not-found results.
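A sketch of that single-key MapReduce approach over Riak's HTTP interface: the bucket, key, host, and the JavaScript map source are placeholder assumptions, and the curl call is echoed as a dry run rather than sent to a node.

```shell
# A /mapred job with one bucket/key input and a JS map function that returns
# only a subset of the stored value, so only that subset crosses the network.
job='{"inputs":[["mybucket","mykey"]],"query":[{"map":{"language":"javascript","source":"function(v){var o=JSON.parse(v.values[0].data);return [o.subset];}"}}]}'
echo "curl -XPOST http://riak-host:8098/mapred -H 'Content-Type: application/json' -d '$job'"
```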
