I want to search for remote patients, studies, and images using the DICOM C-FIND Service Class.
DICOM Part 5 offers a plethora of possible encodings (Transfer Syntaxes) - which one should I choose?
For C-FIND, the minimum DICOM requirement is supporting Implicit Little Endian (Transfer Syntax UID: 1.2.840.10008.1.2).
That means you can rely on any DICOM-conformant server to support it, but you cannot expect all DICOM-conformant servers to support anything else (although most of them do).
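As a sketch, a client might prefer a more modern encoding but always keep the mandatory baseline as a fallback. The UIDs below are the standard DICOM Transfer Syntax UIDs; the helper function and the server's accepted list are hypothetical:

```python
# Standard DICOM Transfer Syntax UIDs.
IMPLICIT_VR_LE = "1.2.840.10008.1.2"    # mandatory baseline for C-FIND
EXPLICIT_VR_LE = "1.2.840.10008.1.2.1"  # widely, but not universally, supported

def pick_transfer_syntax(accepted_by_server):
    """Prefer Explicit VR Little Endian if the peer accepts it,
    otherwise fall back to the mandatory Implicit VR Little Endian."""
    if EXPLICIT_VR_LE in accepted_by_server:
        return EXPLICIT_VR_LE
    # Every DICOM-conformant server must accept this one.
    return IMPLICIT_VR_LE

print(pick_transfer_syntax([EXPLICIT_VR_LE, IMPLICIT_VR_LE]))  # 1.2.840.10008.1.2.1
print(pick_transfer_syntax([IMPLICIT_VR_LE]))                  # 1.2.840.10008.1.2
```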
I am trying to figure out the difference between transferring DICOM files with an SCU/SCP (like pynetdicom3) vs. using the WADO API.
Both methods can be used for transferring DICOM files, but I can't figure out what the standard use case for each is.
First of all, you can implement all common use cases with both approaches. The difference lies more in the technology you are using and the systems you want to interface with than in the features supported by one approach or the other.
The "traditional" TCP/IP-based DICOM services have been in development since 1998. They are widespread and supported by virtually all current systems in the field. From today's perspective they may appear a bit clumsy, and they have some built-in glitches (e.g. the limitation to 127 presentation contexts). Still, they are much more common than the web-based stuff.
Especially when it comes to communication use cases across different sites, they are hard to implement with the TCP/IP-based protocols.
The WADO services were developed by the DICOM committee to adopt new technology and facilitate DICOM implementation for applications based on web technology. They are quite new (in terms of the DICOM Standard ;-) ).
That said, while the major use case is web-based applications, I have not seen any traditional modalities supporting them yet, and I do not expect them to appear in the near future. This is because you can rely on a PACS supporting TCP/IP-based DICOM, but you would have to hope for WADO.
There is a tendency for PACS systems to support WADO in addition to TCP/IP to facilitate integration of web viewers and mobile devices where an increasing number of applications only supports WADO.
So my very subjective advice would be:
For an application that is designed for usage within a hospital: stick with TCP/IP-based DICOM, since you can be quite sure that it will be supported by the systems you are going to interface with.
If connectivity via the internet is a major use case, or your application uses a lot of web technology, consider using WADO - but investigate the support for WADO among the relevant systems you need to interface with. This probably depends on the domain your application is targeting.
To add to the already very good answer by @kritzel_sw: WADO is only part of the picture. WADO is for retrieving images over the web. There is also STOW (STore Over the Web) for storing new objects to the PACS, and QIDO (Query based on ID for DICOM Objects) for querying the PACS.
I think we will see these used more and more in the future, and not only for web-based DICOM viewers but also for normal DICOM communication between systems. They are especially useful when one of the systems is not DICOM-aware and its developers are not experienced in DICOM.
Consider a use case from my own experience. We want doctors to be able to upload photographs of skin conditions of their patients and send these photos to our PACS. It's much easier and probably cheaper to commission a developer to do it with STOW, where the specification is basically "take the JPG photo uploaded by the user, add the necessary metadata in JSON format according to the spec, and send it all to this address with an HTTP POST request", rather than "convert the uploaded JPG files to valid DICOM objects with the necessary metadata, transfer syntax, etc., and implement a C-STORE SCU to send them to our PACS". For the first job you can get any decent developer experienced in web development; for the second you need to find someone who already knows DICOM with all its quirks, or pay someone a lot to learn it.
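The first job above can be sketched with nothing but the standard library - a simplified multipart/related body of the kind a STOW-RS request carries. The boundary, the metadata tag values, and the Content-Location are illustrative, and a real STOW-RS request would additionally need to reference the bulk data from the metadata (e.g. via a BulkDataURI):

```python
import json

BOUNDARY = "DICOMwebBoundary"  # illustrative boundary string

def build_stow_body(metadata, jpeg_bytes):
    """Assemble a simplified multipart/related body: one JSON metadata
    part followed by the raw JPEG bulk-data part."""
    delim = ("--" + BOUNDARY + "\r\n").encode()
    body = delim
    body += b"Content-Type: application/dicom+json\r\n\r\n"
    body += json.dumps(metadata).encode() + b"\r\n"
    body += delim
    body += b"Content-Type: image/jpeg\r\n"
    body += b"Content-Location: photo.jpg\r\n\r\n"  # hypothetical location
    body += jpeg_bytes + b"\r\n"
    body += ("--" + BOUNDARY + "--").encode()       # closing delimiter
    return body

# Patient's Name (0010,0010) in DICOM JSON form; values are illustrative.
metadata = {"00100010": {"vr": "PN", "Value": [{"Alphabetic": "Doe^Jane"}]}}
body = build_stow_body(metadata, b"\xff\xd8...jpeg bytes...\xff\xd9")
# This body would then be POSTed to the PACS's .../studies endpoint with
# Content-Type: multipart/related; type="application/dicom+json"; boundary=...
```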
That's why I love all these new web-based DICOM options and see great future for those.
In DICOM, the following SOP Classes are defined for C-FIND and C-MOVE at the Study Root level:
Study Root Query/Retrieve Information Model - FIND: 1.2.840.10008.5.1.4.1.2.2.1
Study Root Query/Retrieve Information Model - MOVE: 1.2.840.10008.5.1.4.1.2.2.2
I have implemented Query/Retrieve SCP and SCU in multiple applications. In all those cases, I always implemented both classes: I do a C-FIND first to get the list of matching data, and then, based on the result, I do a C-MOVE (automatically or manually) to retrieve the instances. All those implementations work fine.
Recently, I have been working on an application that combines DICOM with a private protocol to fulfill some specific requirements. It occurred to me: is it possible for an SCU to do a C-MOVE directly, without doing a C-FIND first?
I already know the identifier (StudyInstanceUID) to retrieve, and I also know that it is present on the SCP.
I looked into the specifications but could not find anything conclusive. I am aware that C-FIND and C-MOVE can be issued by an SCU to an SCP on different connections/associations. So at first glance, what I am thinking looks possible and legal.
I have worked with many third-party DICOM applications; none of them implements an SCU the way I am thinking. All SCUs implement both C-FIND and C-MOVE.
Question:
Is it DICOM-legal and practical to implement a Query/Retrieve SCU that issues the C-MOVE command without a C-FIND command? Please point me to the reference in the specifications if possible.
Short answer: yes, this is perfectly legal per the DICOM specification.
Long answer: Let's consider the DCMTK reference DICOM Q/R implementation. It provides a set of basic SCU command line tools, namely findscu and movescu. The idea is to pipe the output of findscu to movescu to construct a valid C-MOVE (SCU) request.
In your case, you are simply replacing the findscu step with a private implementation that obtains the identifier not through the publicly defined C-FIND (SCU) protocol but by another mechanism (an extension to DICOM).
So yes, your C-MOVE (SCU) implementation is perfectly valid, since there is no requirement to perform a C-FIND (SCU) before this retrieval.
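As an illustration of a C-MOVE without any prior C-FIND, one could assemble a DCMTK movescu invocation directly from an already-known StudyInstanceUID. The UID, AE titles, host and port below are placeholders for your environment:

```python
# Placeholder StudyInstanceUID, already known through a private mechanism.
study_uid = "1.2.3.4.5.6.7.8.9"

# DCMTK movescu command line: -S selects the Study Root information
# model, -k sets query keys, -aem names the move destination AE.
cmd = [
    "movescu",
    "-S",
    "-k", "QueryRetrieveLevel=STUDY",
    "-k", "StudyInstanceUID=" + study_uid,
    "-aet", "MY_SCU",          # our calling AE title (placeholder)
    "-aec", "REMOTE_PACS",     # the Q/R SCP's AE title (placeholder)
    "-aem", "MY_STORE_SCP",    # where the SCP should C-STORE the data
    "pacs.example.org", "104", # placeholder host and port
]
print(" ".join(cmd))
# subprocess.run(cmd) would execute it; omitted here since it needs a live PACS.
```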
I assume you are not trying to back up an entire database using C-MOVE (SCU); that is just one possible scenario in which someone would use C-MOVE (SCU) without first querying via a valid C-FIND (SCU) result.
Hello all. Forgive me, I am just a newbie in DICOM. I was reading the DIMSE part of the DICOM standard and found that both C-FIND and C-GET provide query/retrieve functionality against a DICOM PACS server.
So I tried to summarize the differences between them:
C-GET triggers one or more C-STORE operations between the SCU and SCP.
C-GET queries for the images themselves, while C-FIND queries only for attributes, not the images.
C-FIND returns multiple response messages if multiple DICOM objects match the query criteria.
Please help review my understanding and correct me if there is any error. Thanks.
You use the C-FIND command for querying and the C-GET command for retrieving DICOM storage instances (images, reports, etc.). C-GET is performed over a single association (connection) but is not commonly used. Instead, C-MOVE is typically used for retrieval; it uses a separate association (connection) and role reversal (the SCP acts as an SCU) to send the data to the destination (the caller or another SCP/server).
I'm a bit confused on encryption file formats.
Let's say I want to encrypt a file with AES-256. I run the file through the encryption algorithm and I now have a stream of encrypted bytes.
I obviously can write that stream of bytes to a file, but any third-party encryption application is not going to understand it since it's not expecting just a raw stream of encrypted bytes.
Into what file formats can I write that so that other encryption tools can understand it?
The ones I know of (I think) are:
PKCS#7
ASN.1
DER
PEM
PKCS#8
but I'm not sure how they all relate to each other.
Apparently the AESCrypt utility also has a format, which appears to be its own proprietary format:
http://www.aescrypt.com/aes_file_format.html
Is there a cheatsheet anywhere on this stuff? I've been googling and found bits and pieces, but never felt like I had the complete picture.
PKCS#8 is not an encrypted-file format, it's a format for private keys.
ASN.1 and DER are rules for translating a structured message into binary. They are not, in and of themselves, a file format, although they're used to define and describe file formats.
PKCS#7 is closely related to PEM, and they're both formats for public-key encrypted files. They are defined in terms of base-64 encapsulated DER encoded ASN.1 messages. They are the basis of the S/MIME format for secure internet mail. (see RFC3851)
In parallel with S/MIME is the OpenPGP file format, also mainly designed for public-key encrypted files. (See RFC4880)
In both S/MIME and OpenPGP formats, there is a block which contains symmetric-key encrypted data. It is possible to create valid S/MIME or OpenPGP files containing only this block. In this way, the S/MIME (a.k.a. PKCS#7) and OpenPGP formats can be used for symmetric-key encryption also.
AES is an encryption algorithm, not a file format.
As you point out, there are lots of knobs and levers on the algorithm - key strength is one. AES-256 just means the AES algorithm with a 256-bit key. But there are lots of other knobs. Mode, for one: AES has a number of modes - CBC, ECB, OFB, CFB, CTR, and others. Another is the IV, which applies to some modes. Padding is another. Usually these knobs are exposed in the AES API of whatever framework you're using.
In most cases AES is combined with other crypto technology - for example, password-based key derivation (PBKDF2) is often used to generate keys or IVs, and MACs are often used to verify the integrity of the encrypted data.
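For example, deriving an AES-256-sized key from a password with PBKDF2 takes only the standard library. The salt, iteration count, and key length below are illustrative choices; a real file format must record the salt and iteration count alongside the ciphertext so the key can be re-derived:

```python
import hashlib
import os

password = b"correct horse battery staple"  # illustrative password
salt = os.urandom(16)                       # random per-file salt
iterations = 200_000                        # illustrative work factor

# Derive a 32-byte (256-bit) key suitable for AES-256.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(len(key))  # 32
```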
Different tools use AES to encrypt, and if they want their data to be readable, they publish the list of knobs they use, and how they are set, as well as how any related crypto technology might be used.
When creating a file format, you'll need to store or publish those kinds of things, if you want your file to be readable by other applications.
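A minimal sketch of such a home-grown (and deliberately NOT interoperable) container: it records the KDF iteration count, salt, and IV in a fixed header and appends an HMAC trailer over everything before it. The magic bytes and layout are invented purely for illustration:

```python
import hashlib
import hmac
import os
import struct

MAGIC = b"MYAES\x01"  # invented magic bytes + format version

def wrap(ciphertext, salt, iv, iterations, mac_key):
    """Prepend a header recording the crypto 'knobs', append an
    HMAC-SHA256 tag over header + ciphertext."""
    header = MAGIC + struct.pack(">I16s16s", iterations, salt, iv)
    tag = hmac.new(mac_key, header + ciphertext, hashlib.sha256).digest()
    return header + ciphertext + tag

def unwrap(blob, mac_key):
    """Verify the tag, then recover the parameters and ciphertext."""
    hdr_len = len(MAGIC) + 36          # 4-byte count + 16-byte salt + 16-byte IV
    header, rest = blob[:hdr_len], blob[hdr_len:]
    ciphertext, tag = rest[:-32], rest[-32:]
    expected = hmac.new(mac_key, header + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed")
    iterations, salt, iv = struct.unpack(">I16s16s", header[len(MAGIC):])
    return iterations, salt, iv, ciphertext

# Round-trip with dummy ciphertext (real ciphertext would come from AES).
mac_key = os.urandom(32)
blob = wrap(b"...ciphertext...", os.urandom(16), os.urandom(16), 200_000, mac_key)
iterations, salt, iv, ct = wrap_result = unwrap(blob, mac_key)
print(iterations)  # 200000
```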
You might want to look into Crypt4GH which was standardized at the end of 2019.
Crypt4GH, a new standard file container format from the Global Alliance for Genomics and Health (GA4GH), allows genomic data to remain secure throughout their lifetime, from initial sequencing to sharing with professionals at external organizations.
From what I can see it is similar - in terms of crypto - to NaCl's crypto_box, but with the advantage of formalizing a file format on disk.
JSON Web Encryption (RFC 7516) is an IETF standard that can do what you are looking for; it can handle AES in addition to other crypto algorithms.
JSON Web Encryption (JWE) represents encrypted content using JSON-based data structures [RFC7159]. The JWE cryptographic mechanisms encrypt and provide integrity protection for an arbitrary sequence of octets.
There are implementations of JWE in multiple languages; for example, in Java you can use Nimbus.
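To get a feel for the format: the protected header of a JWE compact serialization is just base64url-encoded JSON, and a full compact JWE is five such dot-separated parts. The algorithm identifiers below are standard JWE names; the rest is illustrative:

```python
import base64
import json

# Protected header declaring key-wrapping and content-encryption algorithms.
header = {"alg": "A256KW", "enc": "A256GCM"}
protected = base64.urlsafe_b64encode(
    json.dumps(header, separators=(",", ":")).encode()
).rstrip(b"=")  # JWE uses unpadded base64url
print(protected.decode())
# A compact JWE is then five base64url parts joined with dots:
#   protected.encrypted_key.iv.ciphertext.tag
```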
What is the best way to support Zeroconf names in the location segment of a URI design?
RFC 3986 (Uniform Resource Identifier (URI): Generic Syntax) makes no mention of Zeroconf, and I fear the URI syntax is not designed to work beyond DNS resolution.
The ideal answer syntax will:
conform to the generic URI syntax
handle Zeroconf names with multi-byte characters
Here are some options:
dnssd://local/_printer._tcp/Fancy printer/
dnssd://Fancy printer._printer._tcp.local
These strings are IRIs, not URIs, in order to address i18n issues.
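For the first option, the multi-byte instance name can also be carried in a plain URI (rather than an IRI) by percent-encoding its UTF-8 bytes per RFC 3986. A stdlib sketch, with the service instance name as an illustrative value:

```python
from urllib.parse import quote

# A Zeroconf service instance name containing a space and a multi-byte character.
instance = "Fancy printer \u00e9"

# Percent-encode the UTF-8 bytes so the result is a valid URI path segment.
uri = "dnssd://local/_printer._tcp/" + quote(instance, safe="")
print(uri)  # dnssd://local/_printer._tcp/Fancy%20printer%20%C3%A9
```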
A nested URI is another approach that is backward compatible: a nested URI defines the location for use with discovery protocols like Zeroconf.
See 'Nested Uniform Resource Identifiers' by Manuel Urena and David Larrabeiti.
However, I don't find evidence that this approach is in wide use.
A Uniform Resource Identifier is a generic way to locate a resource. For internationalization, you may want to look at IRIs, which are almost the same but allow for full Unicode compatibility. The reason RFC 3986 doesn't mention Zeroconf is that URI is a generalized syntax. Zeroconf may use URIs as part of its protocol for discovery, but the URI specification will never reference a specific implementation (you won't find ftp:, https:, mailto:, skype:, etc. in there either).
Zeroconf is a protocol to automatically configure your network and discover available services. It consists of three parts: address selection (part of IPv4/6), name resolution (mDNS), and service discovery (UPnP (Microsoft), DNS-SD (Apple)). Modern operating systems support all of this out of the box.
If we take UPnP, the discovery is done based on a URI, IIRC. The returned information is given in XML, which can be in any Unicode encoding. If you're a device driver manufacturer, you can place any character in there. The final phase may be presentation, which is a URL, but that's optional.
A URI / URL both support internationalized characters, but only percent-escaped, and not in the domain name part (internationalized domain names are instead encoded separately, via Punycode/IDNA).
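Host names are the exception to percent-escaping: internationalized labels in the authority part are Punycode-encoded (IDNA) instead, which the Python standard library can demonstrate:

```python
# The classic IDNA example: each label of the host name is converted
# to its ASCII-compatible (Punycode) form before use in a URI authority.
host = "b\u00fccher.example"  # "bücher.example"
print(host.encode("idna"))    # b'xn--bcher-kva.example'
```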
-- Abel --