I'm in the market for a good open-source, network-based Pub/Sub (observer pattern) library. I haven't found any I like:
JMS - tied to Java, treats message contents as dumb binary blobs
NDDS - $$, use of IDL
CORBA/ICE - Pub/Sub is built on-top of RPC, CORBA API is non-intuitive
JBOSS/ESB - not too familiar with it
It would be nice if such a package could do the following:
Network based
Aware of payload data; users should not have to worry about endianness/serialization issues
Multiple language support (C++, ruby, Java, python would be nice)
No auto-generated code (no IDLs!)
Intuitive subscription/topic management
For fun, I've created my own. Thoughts?
You might want to look into RabbitMQ.
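For instance, here is a minimal sketch of topic-based pub/sub with RabbitMQ using the Python pika client. The exchange name, routing keys, and localhost broker are illustrative assumptions, not details from the original answer:

```python
import pika

# Publisher: declare a topic exchange and publish to a routing key.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="events", exchange_type="topic")
ch.basic_publish(exchange="events",
                 routing_key="sensors.temperature",
                 body=b'{"value": 21.5}')

# Subscriber (normally a separate process): bind an anonymous queue
# to a wildcard pattern and consume any matching messages.
queue = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="events", queue=queue, routing_key="sensors.*")
ch.basic_consume(queue=queue,
                 on_message_callback=lambda c, m, p, body: print(body),
                 auto_ack=True)
ch.start_consuming()
```

Note that RabbitMQ treats the payload as opaque bytes, so serialization is still your problem, but clients exist for C++, Ruby, Java, and Python.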
As pointed out in an earlier post in this thread, one of your options is OpenSplice DDS, which is an Open Source implementation of the OMG DDS Standard (the same standard implemented by NDDS).
The main advantages of OpenSplice DDS over the other middleware you are considering can be summarized as:
Performance
Rich support for QoS (Persistence, Fault-Tolerance, Timeliness, etc.)
Data Centricity (e.g. possibility of querying and filtering data streams)
Something that I'd like to understand is what your issues with IDL are. DDS uses IDL as a language-independent way of specifying user data types. However, DDS is not limited to IDL; you can use XML if you prefer. The advantage of specifying your data types, and decoupling their representation from a specific programming language, is that the middleware can:
(1) take away from you the burden of serializing data,
(2) generate very time/space efficient serialization,
(3) ensure end-to-end type safety,
(4) allow content filtering on the whole data type (not just the header like in JMS), and
(5) enable on-the-wire interoperability across programming languages (e.g. Java, C/C++, C#, etc.)
Depending on the system or application you are designing, some of the properties above might not be useful or relevant. In that case, you can simply define one or a few generic "DDS types" that serve as holders for your serialized data.
If you think about JMS, it provides you with five different message types you can use to send your data. With DDS you can do the same, but you also have the flexibility to define exactly the topic types you need.
Finally, you might want to check out this blog entry on Scala and DDS for a longer discussion of why types and static typing are good, especially in distributed systems.
-AC
We use the RTI DDS implementation. It costs $$, but it supports many quality of service parameters.
There is a free DDS implementation called OpenDDS, but I've not used it.
I don't see how you can get around the need to predefine your data types if the target language is statically typed.
Look a bit deeper into the various JMS implementations.
Most of them are not Java-only; they provide client libraries for other languages too.
Sun's OpenMQ has at least a C++ interface, and Apache ActiveMQ provides client-side libraries for many common languages.
When it comes to message formats, they're usually decoupled from the message middleware itself. You could define your own message format. You could define your own XML schema and send XML messages. You could send BER-encoded ASN.1 using some third-party library if you want.
Or format and parse the data with a JSON library.
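As an illustration, here is a minimal sketch of sending a self-defined JSON message to ActiveMQ over STOMP with the Python stomp.py client. The broker address, credentials, and topic name are illustrative:

```python
import json
import stomp

# Connect to an ActiveMQ broker over STOMP (61613 is the default STOMP port).
conn = stomp.Connection([("localhost", 61613)])
conn.connect("admin", "admin", wait=True)

# The message format is entirely ours: a self-defined JSON payload.
payload = {"event": "order.created", "order_id": 42}
conn.send(destination="/topic/orders",
          body=json.dumps(payload),
          content_type="application/json")
conn.disconnect()
```

The broker just relays the bytes; any subscriber in any language can parse the JSON with its own library.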
You might be interested in the MUSCLE library (disclaimer: I wrote it, so I may be biased). I think it meets all of the criteria you specified.
https://public.msli.com/lcs/muscle/
Three I've used:
IBM MQ Series - too expensive, hard to work with.
Tibco Rendezvous - (renamed to EMS now?) Was very fast, used UDP, and could also be used with no central server. My favorite, but expensive and requires a maintenance fee.
ActiveMQ - I'm using this currently but finding it crashes frequently. It also requires some projects ported from Java, like Spring.NET. It works, but I can't recommend it due to stability issues.
I also used MSMQ in an attempt to build my own Pub/Sub, but since it doesn't handle that out of the box, you're stuck writing a considerable amount of code.
There is also OpenSplice DDS. This one is similar to RTI's DDS, except that it's LGPL!
Check out IBM WebSphere MQ; the license is not too expensive if you work at a corporate level.
You might take a look at PubSubHubbub. It's an extension to Atom/RSS to allow pub/sub through webhooks. The interface is HTTP and XML, so it's language-agnostic. It's gaining increasing adoption now that Google Reader, FriendFeed and FeedBurner are using it. The main use case is blogs and the like, but of course you can have any sort of payload.
The only open source implementation I know of so far is this one for the Google AppEngine. They say support for self-hosting is coming.
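To give a flavor of the protocol: a subscription is just a form-encoded HTTP POST to the hub. Here is a minimal Python sketch using the requests library; the hub, topic, and callback URLs are illustrative:

```python
import requests

# Ask a hub to start delivering updates for a feed to our webhook.
resp = requests.post(
    "https://pubsubhubbub.appspot.com/",  # a public hub
    data={
        "hub.mode": "subscribe",
        "hub.topic": "https://example.org/feed.atom",      # feed to follow
        "hub.callback": "https://myapp.example.org/push",  # our webhook URL
        "hub.verify": "async",
    },
)
# The hub verifies the subscription by calling the callback with a
# hub.challenge parameter, then POSTs new entries to it as they appear.
print(resp.status_code)  # 202 Accepted means the request was queued
```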
Related
I would like to know some popular frameworks that are available for implementing CQRS, ES, Saga in the application.
As a part of my research, I have to compare these frameworks and evaluate them based on various -ilities.
I have to compare these [event-sourcing] frameworks and evaluate them based on various -ilities.
The premise of the question is that you need a framework to implement event sourcing but, in fact, you do not.
Greg Young, one of the most influential proponents of event sourcing, frequently expresses his misgivings about frameworks. See, for instance, his QCon London 2013 keynote, especially around the 9-minute mark.
Event sourcing is conceptually simple and doesn't need the kind of magic that frameworks typically bring with them. For instance, rebuilding the state from a stream of events simply consists in a left fold over the stream in question. Moreover, you don't necessarily need a specialised database; I know people who have successfully implemented event sourcing by simply appending events to a file.
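To make that concrete, here is a minimal Python sketch of rebuilding state as a left fold over an event stream; the event shapes and the `apply` function are hypothetical:

```python
from functools import reduce

# Hypothetical account events; in practice these would be read back
# from an append-only event log.
events = [("deposited", 100), ("withdrawn", 30), ("deposited", 5)]

def apply(balance, event):
    """Transition function: current state + one event -> next state."""
    kind, amount = event
    if kind == "deposited":
        return balance + amount
    if kind == "withdrawn":
        return balance - amount
    return balance  # unknown events leave the state unchanged

# Rebuilding state is literally a left fold over the stream.
state = reduce(apply, events, 0)
print(state)  # 75
```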
If your research aims at comparing event-sourcing frameworks, I would argue that you should consider the case where no framework is used at all.
Axon is a popular framework/server for building CQRS/ES applications.
EventStoreDB is a popular EventStore database for the EventSourcing part.
A simple starting point if you want to write your own framework/library is to check out some of the code I co-authored at https://www.cqrs.nu/
If you are looking for a managed solution, you can also check out what we at Serialized provide.
In addition to Axon, on the JVM there's also the Akka ecosystem (the cluster sharding, persistence, sharded daemon process, and projection modules are the most relevant to CQRS/ES/DDD). One benefit of Akka Persistence is the ability to choose from a variety of datastores to use as an event store (JDBC SQL databases and Cassandra are the most common, but many more are supported). My experience has been that it is capable of exceptionally high availability, and since it allows a stateful event-sourced application to be deployed as if it were stateless (e.g. in Kubernetes without needing an operator), there's a lot of deployment flexibility. Note that because it's built on the actor model, a lot of JVM observability tooling doesn't work particularly well with it (such tooling often assumes a stronger mapping of threads to tasks), so certain commercially licensed observability tooling is recommended.
Additionally, Kalix provides a polyglot event-sourcing implementation: all you need is to express your domain logic in a language that supports gRPC.
Disclaimer: almost a year after answering this question, I became employed by Lightbend, the maintainers of Akka and providers of Kalix.
I am trying to figure out the difference between transferring DICOM files with an SCU/SCP (e.g. pynetdicom3) and using the WADO API.
Both methods can be used for transferring DICOM files, but I can't figure out the standard use case for each.
First of all, you can implement all common use cases with both approaches. The difference lies more in the technology you are using and the systems you want to interface with than in the features supported by one approach or the other.
The "traditional" TCP/IP based DICOM services have been developed since 1998. They are widely spread and widely supported by virtually all current systems in the field. From the nowadays perspective they may appear a bit clumsy and they have some built-in glitches (e.g. limitation to 127 presentation contexts). Still they are much more common than the web-based stuff.
Communication use cases across different sites, especially, are hard to implement with the TCP/IP-based protocols.
The WADO services were developed by the DICOM committee to adopt new technology and facilitate DICOM implementation for applications based on web technology. They are quite new (in terms of the DICOM Standard ;-) ).
That said, with the major use case being web-based applications, I have not seen any traditional modalities supporting the WADO services yet, and I do not expect them to appear in the near future. This is because you can rely on a PACS supporting TCP/IP-based DICOM, but you would have to hope for WADO.
There is a tendency for PACS systems to support WADO in addition to TCP/IP, to facilitate the integration of web viewers and mobile devices, where an increasing number of applications supports only WADO.
So my very subjective advice would be:
For an application designed for usage within a hospital: stick with TCP/IP-based DICOM, since you can be quite sure that it will be supported by the systems you are going to interface with.
If connectivity via the internet is a major use case, or your application uses a lot of web technology, consider using WADO, but investigate the support for WADO among the relevant systems you need to interface with. This probably depends on the domain your application is targeting. A sketch contrasting the two approaches follows below.
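Here is a rough Python sketch of the two approaches side by side, using the current pynetdicom package (the successor to pynetdicom3) and plain HTTP via requests. The PACS hostname, port, endpoint, and study UID are illustrative assumptions:

```python
# Traditional DICOM: store an image via a C-STORE SCU (pynetdicom/pydicom).
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

ae = AE(ae_title="MY_SCU")
ae.add_requested_context(CTImageStorage)
assoc = ae.associate("pacs.example.org", 11112)  # hypothetical PACS
if assoc.is_established:
    status = assoc.send_c_store(dcmread("image.dcm"))
    assoc.release()

# Web-based DICOM: retrieve a study via WADO-RS, i.e. a plain HTTP GET.
import requests

resp = requests.get(
    "https://pacs.example.org/dicom-web/studies/1.2.3.4",  # hypothetical URL
    headers={"Accept": 'multipart/related; type="application/dicom"'},
)
```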
To add to the already very good answer by @kritzel_sw: WADO is only part of the picture. WADO is for retrieving images over the web. There's also STOW, or STore Over the Web, for storing new objects to the PACS, and QIDO, or Query based on ID for DICOM Objects, for querying the PACS.
I think we will see these services more and more in the future, and not only for web-based DICOM viewers but also for normal DICOM communication between systems. They are especially useful in cases where one of the systems is not DICOM-aware and the developers are not experienced in DICOM either.
Consider a use case from my own experience. We want doctors to be able to upload photographs of skin conditions of their patients and send these photos to our PACS. It's much easier and probably cheaper to commission some developer to do it with STOW, where the specification is basically "take the JPG photo uploaded by the user, add the necessary metadata in JSON format according to the spec, and send it all to this address with an HTTP POST request", rather than "convert uploaded JPG files to valid DICOM objects with the necessary metadata, transfer syntax, etc., and implement a C-STORE SCU to send them to our PACS". For the first job you can get any decent developer experienced in web development; for the second you need to find someone who already knows DICOM with all its quirks, or pay someone a lot to learn it.
That's why I love all these new web-based DICOM options and see a great future for them.
While learning more about HTTP 1.1 and reading the spec, it strikes me that it could be helpful to have a public reference implementation which can demonstrate the protocol. I imagine it would provide ideal, basic examples, as well as working examples of those parts of the protocol which are often disabled on public servers (e.g., TRACE).
I'm talking about a running, publicly accessible server (or servers). The idea would be to show how HTTP works (or should work) via an actual running webserver (and its source). A user could build arbitrary requests using Fiddler or the like, to see how the server responds. I'm assuming it would be open source. It would likely be based on an existing webserver implementation (e.g., Apache), perhaps with extensions to support the parts of the protocol the existing implementation doesn't (Transfer-Encoding compression, etc.). I know this last part is a pipe dream; I'm just putting it here by way of explanation.
I understand that HTTP is a very broad protocol, so that a reference implementation would not be comprehensive. I can imagine many, many reasons why something like this would not exist, and I know I can start up my own local server and play around with it (I've done that sort of thing for years). I know I can poke around against well-known existing public servers (Google, etc.). But, I'm wondering if anything like a public reference implementation exists.
As an IETF spec, HTTP/1.1 does not have a reference implementation. Instead,
"at least two independent interoperating implementations
with widespread deployment and successful operational experience"
were required.
From the Implementation report for HTTP/1.1 to Draft Standard, you can see there were substantially more than that:
We have implementation & testing reports from 26 implementations
You say:
I can imagine many, many reasons why something like this would not exist
Here's one: for a reasonably complex specification, you don't want people designing to a specific implementation. Any "reference" implementation would have bugs, which would then be picked up by subsequent code built against that reference.
The specification is authoritative; in the case that an implementation diverges, you should consult the specification (and its errata) for the correct behaviour.
I know I can poke around against well-known existing public servers
Exactly. Per The Tao of IETF:
"We believe in rough consensus and running code"
As a software engineer, I want to tell your system when something needs to be done. I want to provide the implementation code of what needs to be done. I want your system to call into my code and execute my implementation. I want my code to execute in its own processing space, probably on my own infrastructure and servers. As a software engineer, I favor convention over configuration.
I need this feature because I often work on service agreements with customers to deliver specialized, one-off solutions, and I don't want to build this plumbing from scratch for each new client.
I simply want to write some code that does some work using my resources, and I want your system to begin the execution of my code.
NServiceBus (NSB) should be able to meet your needs. You will be able to get messages from external systems that don't talk to a Microsoft platform by exposing your endpoint(s) as WCF services (built in). NSB also supports Pub/Sub as well as many other message patterns. As long as the exchanges can be unidirectional, you should be off to a good start. NSB will handle all of the underlying plumbing you speak of and will ensure that messages don't get lost.
I have a small RIA that I built as a learning/make-my-life-easier project that uses Flex and ASP.NET. Currently, my architecture uses straight HTTP POSTs, and the server responds with XML. I posted another question about security in my web app, and some people mentioned SOAP. SOAP is something I've never actually used, and I was wondering what the pros/cons are of using SOAP over my current architecture and, subsequently, how much work is required to convert such an application to use SOAP.
Thanks,
Chris
Since you have already implemented your own message schemas for sending and receiving, SOAP in and of itself will not give you any added value. The added value comes from SOAP's support for the WS-* standards, covering security, transactions, and several other goodies. The recommended way to take advantage of all that is to use WCF rather than ASP.NET, because WCF supports the latest versions of those standards.
FYI, when trying to use SOAP with Flex: the XMLDecoder in Flex does not currently decode some complex data types appropriately, making it appear that you are not receiving all the data. I tracked the error down to the XMLDecoder, where I can see that the correct data is received but is not appropriately packaged in the ResultEvent, requiring me to override the XMLDecoder, which, as I am sure you can imagine, is not very fun. Just wanted to put in my 2 cents: if you are thinking of moving in that direction, it's probably good to know it doesn't always work for very complicated data types (in my case, a collection of objects containing 2 strings and 2 arrays only returns a collection of 1 string). Granted, it works 99% of the time, but not always.