I have been asked to create a system that accepts invoices from a company via AS2 EDI (and, in the near future, from many other companies). Through some research I came to the conclusion that I need a BizTalk Server to translate the company's invoice, convert it into XML, and then send that XML to a system we have for processing/validation. I am completely stumped as to how to make all of this work.
I've been learning what I can from Microsoft's BizTalk tutorials & videos, and a little bit from Pluralsight. But there are things that I just don't get at all. One of those things is customer interaction: how are they supposed to know what data to send us (what document do I give them?), and how do I read the paper they sent me listing their message encryption? How does X12 or EDIFACT tie into all of this?
Do I have this right: I am supposed to create an X12 document with the fields (data) I need in order to process their invoice, and then I am supposed to send them this X12 document and say "here, send us this thing"? And then on my side, create the mapping from that X12 document, the orchestration for validation, and then return them a success or fail?
What resources can I use to learn how to answer these questions? Where do people even go to learn BizTalk Server when they're beginners?
I really appreciate any help from anyone. Thank you for reading.
AS2 is a standard for secure data transport, and EDI stands for Electronic Data Interchange, so AS2 EDI means exchanging electronic documents securely over AS2.
EDIFACT is a standard for electronic documents: tagged flat files with a nested looping structure, dating back to when electronic documents needed to be as small as possible because of slow transmission speeds. If you are dealing with invoices you will probably be dealing with EDIFACT INVOIC D96A or similar.
X12 is the Accredited Standards Committee X12, a standards organisation that sets standards to be followed. It defines a different document format from EDIFACT (see EDI X12 vs UN/EDIFACT).
I think you will find that different customers will use different electronic document formats, and even if they use the same EDIFACT document type they will use it differently from each other (not everyone interprets the standard the same way).
And not all of them will use AS2. So you have to make your solution extensible enough to accept multiple incoming formats and protocols, and to respond with different outgoing formats.
It sounds like you are a bit out of your depth here; what you are asking is not easy to start off with, and the scope of your question is a bit too broad for Stack Overflow.
Learning BizTalk on your own without being mentored would be very hard.
There are some useful books out there, such as Packt's Microsoft BizTalk Server 2010 Patterns and Microsoft's BizTalk Server 2010 Exam 70-595 Preparation, that teach you the basics. There are also many blogs out there that are useful.
If you want your project to succeed however I would recommend hiring some people experienced in the EDI field.
A few things:
You don't have to create the X12 transactions; BizTalk Server ships with schemas for X12 and EDIFACT. You probably won't even need to customize them.
In the beginning, you can probably rely on the Trading Partners to provide Companion Guides, especially if they have a more established EDI implementation.
If they don't, and you need to create something, a spreadsheet is really all they need, listing exactly which Segments and Elements need to be populated and how (the sketch after this list shows what segments and elements look like in a raw X12 file).
If you Bing the transaction you have to work with, "x12 850" for example, you will find other companies' companion guides you can 'take inspiration' from.
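To give a feel for what those Segments and Elements look like on the wire, here is a tiny illustrative fragment being pulled apart in Python. The delimiters ('~' for segments, '*' for elements) are common defaults, but the real separators are declared in the ISA segment of each interchange, and the fragment below is not a complete, valid 810.

```python
# Illustrative only: a few 810 (invoice) segments with assumed default delimiters.
sample = "ST*810*0001~BIG*20240101*INV-12345~TDS*150000~SE*4*0001~"

SEGMENT_TERMINATOR = "~"
ELEMENT_SEPARATOR = "*"

for segment in filter(None, sample.split(SEGMENT_TERMINATOR)):
    elements = segment.split(ELEMENT_SEPARATOR)
    # elements[0] is the segment ID (ST, BIG, TDS, SE); the rest are its elements
    print(elements[0], elements[1:])
```

A Companion Guide (or your spreadsheet) is essentially a human-readable description of which of those elements must be filled in and with what.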
I am trying to figure out the difference between transferring DICOM files with an SCU/SCP (like pynetdicom3) versus using the WADO API.
Both methods can be used for transferring DICOM files, but I can't figure out what the standard use case for each one is.
First of all, you can implement all common use cases with both approaches. The difference lies more in the technology you are using and the systems you want to interface with than in the features supported by one approach or the other.
The "traditional" TCP/IP based DICOM services have been developed since 1998. They are widely spread and widely supported by virtually all current systems in the field. From the nowadays perspective they may appear a bit clumsy and they have some built-in glitches (e.g. limitation to 127 presentation contexts). Still they are much more common than the web-based stuff.
Especially when it comes to communication use cases across different sites, it is hard to implement them with the TCP/IP based protocols.
The WADO services were developed by the DICOM committee to adopt new technology and to facilitate DICOM implementation for applications based on web technology. They are quite new (in terms of the DICOM Standard ;-) ).
Having said that the major use case is web-based applications: I have not seen any traditional modalities supporting them yet, and I do not expect them to appear in the near future. This is because you can rely on a PACS supporting TCP/IP-based DICOM, but you would have to hope for WADO.
There is a tendency for PACS systems to support WADO in addition to TCP/IP to facilitate integration of web viewers and mobile devices where an increasing number of applications only supports WADO.
So my very subjective advice would be:
For an application that is designed for the usage within a hospital: Stick with TCP/IP based DICOM, since you can be quite sure that it will be supported by the systems you are going to interface with.
If connectivity via internet is a major use case, or your application uses a lot of web technology, consider using WADO but investigate the support for WADO among the relevant systems you need to interface with. This probably depends on the domain your application is targeting.
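To make the contrast concrete, here are two minimal sketches: a DIMSE C-ECHO over plain TCP/IP using pynetdicom (the successor to pynetdicom3), and a WADO-RS study retrieval over HTTP. The host name, port and Study Instance UID are placeholders.

```python
# 1) "Traditional" TCP/IP DICOM: verify connectivity with a C-ECHO
from pynetdicom import AE

ae = AE(ae_title="MY_SCU")
ae.add_requested_context("1.2.840.10008.1.1")    # Verification SOP Class
assoc = ae.associate("pacs.example.org", 11112)  # placeholder host/port
if assoc.is_established:
    status = assoc.send_c_echo()
    print("C-ECHO status:", status)
    assoc.release()

# 2) Web-based DICOM: retrieve a study via WADO-RS with a plain HTTP GET
import requests

study_uid = "1.2.840.113619.2.55.3"              # placeholder Study Instance UID
resp = requests.get(
    f"https://pacs.example.org/dicom-web/studies/{study_uid}",
    headers={"Accept": 'multipart/related; type="application/dicom"'},
)
print(resp.status_code, resp.headers.get("Content-Type"))
```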
To add to the already very good answer by @kritzel_sw - WADO is only part of the picture. WADO is for retrieving images over the web. There's also STOW, or STore Over the Web, and QIDO, or Query based on ID for DICOM Objects, for storing new objects to the PACS and querying the PACS, respectively.
I think we will see them more and more in the future, and not only for web-based DICOM viewers, but also for normal DICOM communication between systems. They are especially useful in cases where one of the systems is not DICOM-aware and its developers are not experienced in DICOM.
Consider a use case from my own experience. We want doctors to be able to upload photographs of skin conditions of their patients and send these photos to our PACS. It's much easier, and probably cheaper, to commission some developer to do it with STOW, where the specification is basically "take the JPG photo uploaded by the user, add the necessary metadata in JSON format according to the spec and send it all to this address with an HTTP POST request", rather than "convert uploaded JPG files to valid DICOM objects with the necessary metadata, transfer syntax etc. and implement a C-STORE SCU to send them to our PACS". For the first job you can get any decent developer experienced in web dev; for the second you need to find someone who already knows what DICOM is, with all its quirks, or pay someone a lot to learn it.
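For illustration, the transport side of such a STOW request can be as small as the sketch below. It assumes the photo has already been wrapped into a DICOM Part 10 file (the JSON-metadata-plus-bulk-data variant described above follows the same multipart/related pattern), and the endpoint URL and file name are placeholders.

```python
import uuid
import requests

STOW_URL = "https://pacs.example.org/dicom-web/studies"   # placeholder endpoint
boundary = uuid.uuid4().hex

with open("skin_photo.dcm", "rb") as f:   # photo already encapsulated as DICOM
    part10 = f.read()

# Build a single-part multipart/related body containing the DICOM object
body = (
    (f"--{boundary}\r\n"
     "Content-Type: application/dicom\r\n\r\n").encode("ascii")
    + part10
    + f"\r\n--{boundary}--\r\n".encode("ascii")
)

resp = requests.post(
    STOW_URL,
    data=body,
    headers={
        "Content-Type": f'multipart/related; type="application/dicom"; boundary={boundary}',
        "Accept": "application/dicom+json",
    },
)
print(resp.status_code)
```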
That's why I love all these new web-based DICOM options and see great future for those.
Does UFT 12.02 support ISO 8583 & NDC protocol testing? If yes, can you advise the basic steps to configure this in UFT?
The short answer is No.
They list Financial Services as one of the supported environments, but my experience is that this does not cover financial transaction testing.
The longer version is that you can technically configure UFT to send these messages (if they are straight text) by capturing examples and sending them back through. However, when I was given a chance to trial it and did exactly that, it did not do this very effectively and had very high overhead compared with other financial transaction testing tools on the market. Especially for dynamic creation of data, packed vs. unpacked fields, BER-TLV, encryption keys, manipulation of data elements, and things that regularly change, it is not very effective.
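To give an idea of what building such messages involves, here is a rough, illustrative sketch of an "unpacked" ASCII ISO 8583 message (MTI, then a 64-bit primary bitmap as 16 hex characters, then the data elements in bit order). Field formats and the exact wire encoding vary per interchange specification, so the field choices below are assumptions rather than any particular network's layout.

```python
def build_bitmap(field_numbers):
    """Return the primary bitmap as 16 hex characters (bit 1 = most significant bit)."""
    bits = 0
    for n in field_numbers:
        bits |= 1 << (64 - n)
    return f"{bits:016X}"

def build_message(mti, fields):
    """fields: {field_number: already-formatted value}."""
    present = sorted(fields)
    return mti + build_bitmap(present) + "".join(fields[n] for n in present)

# A hypothetical 0200 (financial request) with a handful of fixed-length fields
msg = build_message("0200", {
    3: "000000",          # DE 3  processing code, n6
    4: "000000012345",    # DE 4  transaction amount, n12
    11: "000001",         # DE 11 system trace audit number, n6
    41: "TERM0001",       # DE 41 terminal ID, ans8
})
print(msg)
```

Even this toy version ignores variable-length (LLVAR) fields, packed BCD, secondary bitmaps and MACing, which is where the real effort goes.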
I would consider the following options; each of these vendors has specific solutions for each interface and many pre-built ISO 8583 modules you can license, or you can create your own.
Paragon's FasTest
ACI ASSET
FIS's Clear-2-Pay (formerly Lexcel)
Paragon also has products called ATMeMulator and ConfigBuilder for NCR & Diebold; I have seen no equivalent product out there (or at least nothing in the ballpark of its features/functionality) for building and testing loads, screens, states, transactions, faults, etc.
I am very new to dealing with HL7 and my company recently began a very large project in which we will be receiving various ADT messages in the HL7 v2.4 specification. We already use BizTalk extensively here and the plan was to leverage the BTAHL7 accelerator for BizTalk 2010 to accept these messages.
My issue is this: the ADT messages we are receiving from our trading partner do not match the HL7 v2.4 specification for pretty much all of the message types we are receiving (even though the MSH segment says v2.4 and they've told us that is the version they will be sending files in).
For instance, they are sending us A04 messages, and in the PV1-3 field the spec calls for 9 components (separated by the standard ^ delimiter). What they are sending in that field is 11 components.
Example:
F1^F2^F3^F4^F5^F6^F7^F8^F9^F10^F11
instead of this (which would match the spec):
F1^F2^F3^F4^F5^F6^F7^F8^F9
This also happens for the PV1-42 field.
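To make the mismatch concrete, splitting the received field on the component separator yields 11 components where the schema allows only 9 (a quick sketch using the sample values above):

```python
received = "F1^F2^F3^F4^F5^F6^F7^F8^F9^F10^F11"
expected = 9

components = received.split("^")
print(len(components))            # 11
print(len(components) > expected) # True -> the message fails schema validation
```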
After scouring the internet I can't find any help for dealing with this kind of situation in BizTalk using the accelerator. It seems that people can, and often do, customize the data within the message (for instance sending strings where the spec calls for an int), but cannot change the actual layout (the situation listed above) when dealing with HL7 and BizTalk. These messages fail even when I don't set up BizTalk to validate body segments or custom data types, which makes sense and is to be expected, because they aren't sending strange data that still conforms to the layout of the spec, but rather an entirely different layout.
My question is this. Is there a way to deal with this utilizing the accelerator functionality without having to write custom code to "fix" the files before sending them to the accelerator pipelines? According to our trading partner this is just the way their product (Cloverleaf) sends the data and that they are already working with various other trading partners with this format.
Yes. Unless the Trading Partner is doing something that does not follow HL7 convention, you can handle such customization by modifying the HL7 message schema to accommodate the differences.
In this case, just add two additional child elements to the affected PV1 fields to accept the new data.
You will also have to change the TargetNamespace of the modified schema to isolate it to this Trading Partner, and set that on... one of the tabs (sorry, I don't recall exactly which) in HL7 Configuration.
"What makes a good BizTalk project" is a question I was asked recently by a client's head of IT. It's rather open ended, so rephrasing it slightly to :
"what are you top ten best practices for a BizTalk 2006 and onwards projects - not limited to just technical practices, eg organisational"
I wrote an article called "Top 10 BizTalk Server Mistakes" that covers some key best practices in terms of usable information rather than a simple list. Here's the listing:
Using orchestrations for everything
Writing custom code instead of using existing adapters
Using non-serializable types and wrapping them inside an atomic transaction
Mixing transaction types
Relying on Public schemas for private processing
Using XmlDocument in a pipeline
Using 'specify now' binding
Using BizTalk for ETL
Dumping debug/intermediate results to support debugging
Propagating the myth that BizTalk is slow
...and the link to the complete article: [Top 10 BizTalk Server Mistakes](http://artofbabel.com/columns/top-x/49-top-10-biztalk-server-mistakes.html)
The key point is to emphasize to the client that BizTalk is a swiss army knife for interop... an expensive swiss army knife. A programmer can wire up two enterprise systems with a WCF application as fast as you can with BizTalk. The key things to include/require when using BizTalk are to:
Have more than simple point-to-point integrations. If that is all you have, fine; see the rest.
Have all or a portion of a process that is valuable going through BizTalk so that you can instrument it with BAM and provide process monitoring to the organization... maybe even some BI.
If you are implementing a one-to-many or many-to-one scenario, use of the BizTalk ESB patterns will pay dividends in the long run.
When there are items that need to be regularly tweaked - thresholds, URIs, etc. - use of the Business Rules Engine can provide an easily maintainable solution.
When endpoints might be semi-connected, BizTalk bakes in queueing of messages with no extra effort.
Complicated correlations or ordering of messages.
Integrating with existing enterprise systems can be simplified with the adapter packs provided as part of BizTalk. This alone can save big bucks. Asking Oracle, PeopleSoft or Siebel folks about XML and web services can be a challenging experience. The adapters get you and BizTalk through the enterprise apps' front door and reduce the work for them significantly.
There are more I just can't think of at midnight.
Any of these items makes BizTalk a winning candidate, because so much of it is given to you with the platform. If you are not being required to provide any of these, you should really attempt to deliver some of these benefits in a highly visible way to the client. If you don't, it's just an expensive and under-utilized swiss army knife.
I'll start with Environment and Deployment Planning. Especially testing deployment and matching your QA/Stage (whatever the pre-production environment is) to the production environment so you don't find out some weirdness at midnight when you are trying to go live.
I have a situation where a single Oracle system is the data master for two separate CRM systems (PeopleSoft & Siebel). The Oracle system sends CRUD messages to BizTalk for customer data, inventory data, product info and product pricing. BizTalk formats and forwards the messages on to the PeopleSoft & Siebel web service interfaces for action. After the initial synchronization of the data, ongoing operation has created a situation where the data isn't accurate in the outlying Siebel and PeopleSoft systems despite successful delivery of the data (this is another conversation about what these systems mean when they return a 'Success' to BizTalk).
What do other similar implementations do to reconcile system data in this distributed service-oriented approach? Do they run a periodic dump from all systems for comparison? Are there any other techniques or methodologies for spotting failed updates and ensuring synchronization?
Your thoughts and experiences are appreciated. Thanks!
Additional Info
So why do the systems get out of sync? Whenever a destination system acknowledges to BizTalk that it has received the message, it can mean many things. Sometimes an HTTP 200 means "I've got it and put it in a staging table and I'll commit it in a bit"; sometimes that commit is successful, sometimes it is not, for various data issues. Sometimes the HTTP 200 means "yes, I have received and committed the data". Using HTTP, there can also be issues with ordered delivery. All of these problems could have been solved with a lot of architectural planning up front. It was not done. There are no update/create timestamps to prevent out-of-order delivery from stepping on data. There is no full round-trip acknowledgement of data commit from the destination systems. All of this adds up to things getting out of sync.
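As an example of the kind of guard that was missing, the destination (or the BizTalk layer) could refuse to apply a message that is older than what it already holds. Here is a rough sketch, assuming each message carried a source-system modification timestamp; all names and structures are made up.

```python
from datetime import datetime

# Stand-in for the destination system's record store
records = {}   # business key -> {"data": ..., "source_updated_at": datetime}

def apply_update(key, data, source_updated_at: datetime) -> bool:
    """Apply an update only if it is newer than what we already hold, so a
    late, out-of-order message cannot overwrite fresher data."""
    current = records.get(key)
    if current and current["source_updated_at"] >= source_updated_at:
        return False   # stale message: ignore it, or park it for manual review
    records[key] = {"data": data, "source_updated_at": source_updated_at}
    return True
```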
(Sorry, this is an answer and not a comment; I'm working my way up to 50 points.)
Can the data be updated in the other systems or is it essentially read only?
Could you implement some further validation in the BizTalk layer to ensure that updates wouldn't fail because of data issues?
Can you grab any sort of notification that the update failed from the destination systems which would allow you to compensate in the BizTalk layer?
FWIW, in situations like this I have usually ended up with a central data store that contains at least the data keys from the three systems, which acts as the new golden repository for the data; however, this is usually to compensate for multiple update sources. It seems like we also usually operate some sort of manual error queue that users must maintain.
As for your idea of batch reconciliation, I have seen that be quite common to compensate for transactional errors, especially in the financial services realm.
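As a rough illustration of that batch approach: if each system can produce a periodic extract of business keys and last-modified timestamps, a simple comparison highlights missing, stale and orphaned records. All names and values below are hypothetical.

```python
def reconcile(master, replica):
    """master/replica: {business_key: last_modified_timestamp}."""
    missing = sorted(k for k in master if k not in replica)
    stale   = sorted(k for k in master if k in replica and replica[k] < master[k])
    orphans = sorted(k for k in replica if k not in master)
    return missing, stale, orphans

# Example with made-up keys and version numbers
oracle = {"CUST-1": 105, "CUST-2": 200, "CUST-3": 300}
siebel = {"CUST-1": 105, "CUST-2": 150}
missing, stale, orphans = reconcile(oracle, siebel)
print("missing:", missing)   # ['CUST-3'] never arrived
print("stale:",   stale)     # ['CUST-2'] arrived but an update was lost
print("orphans:", orphans)   # [] records the master no longer knows about
```

The output of such a run can then feed the manual error queue mentioned above.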