EDI ANSI X12 204 SEF file

I am looking for an EDI X12 4010 204 SEF file. Please let me know where I can find this file.
Actually, I need the details of all the EDI 204 segments and their corresponding qualifiers. Please help me if anyone knows where I can find this.
Thanks,
Nitin

A similar question arose from someone dealing with Amazon. I strongly suggest you reach out to your trading partner for their standards -- this is normal practice in the EDI world. EDI provides general guidelines, but the actual implementation of even a single transaction, like the 850 (purchase order), varies widely from customer to customer.

You don't technically need a SEF file. In our case, we went straight down the in-house path, writing our own classes and programs to translate EDIFACT messages into XML and then capturing the data we needed in SQL Server tables.
First you need to determine what data you want out of the EDI message. Next, translate the message into a readable format such as XML (the SEF file is a map of the EDI, which is why you think you need it; it helps, but it's not mandatory). Once you have it translated, you can extract what you need.
I strongly suggest you read and study this great effort on CodeProject...
https://www.codeproject.com/Articles/11278/EDIFACT-to-XML-to-Anything-You-Want
Remember, EDI files are really "message packets", not files that one would simply convert into another format such as CSV or even XML. You first need to "map" the EDI message into a meaningful layout that you can then pull data out of to meet your needs.
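To make the "translate, then extract" step concrete, here is a minimal Python sketch (the language, the file name, and the choice of the S5 stop-off segment are my own illustrative assumptions, not anything mandated by the standard). It tokenizes an X12 interchange using the separators declared in its own ISA envelope, which is one reason a SEF file isn't strictly required:

def parse_x12(raw: str):
    """Tokenize a raw X12 interchange into segments and elements. In the
    fixed-width ISA segment, the element separator is the 4th character
    and the segment terminator is the 106th, so both can be read from
    the envelope itself."""
    if not raw.startswith("ISA"):
        raise ValueError("not an X12 interchange")
    element_sep, segment_term = raw[3], raw[105]
    return [seg.strip().split(element_sep)
            for seg in raw.split(segment_term) if seg.strip()]

# Illustration: pull every stop-off (S5) out of a 204 load tender.
# "load_tender.edi" is a placeholder file name.
for seg in parse_x12(open("load_tender.edi").read()):
    if seg[0] == "S5":
        print("stop", seg[1], "reason code", seg[2])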
Think of EDI as being the precursor to BLOCKCHAIN. Yes, Bitcoin. Remember that!? Blockchain is the modern version of EDI messaging and Business-2-Business processing.

Related

OpenAS2 EDIFACT order to Woo order how?

We have successfully set up OpenAS2 (https://github.com/OpenAS2/OpenAs2App) to send messages between our partners.
We are receiving orders from our partner in EDI (EDIFACT) format. Does anyone have a suggestion for the best way to translate such an order and get it into our WooCommerce server as an order? Woo has an API to place orders:
https://woocommerce.github.io/woocommerce-rest-api-docs/#create-an-order but I'm not sure how to go from an EDIFACT order to a Woo order. Any suggestions?
What programming languages are in your tech stack? You need to convert the segments/elements into the XML/JSON that the API requires. EDI is just text formatted in a specific way. If you have Python, you might give BOTS a try. There are a lot of open-source parsers for EDIFACT. Or you could go the commercial translator route.
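If you do roll your own, the shape of the job looks roughly like the sketch below (Python with the requests library). The store URL, the credentials, and the assumption that your partner puts an item id in LIN element 3 with QTY qualifier 21 (ordered quantity) following it are all placeholders -- check your partner's implementation guide:

import requests

WOO_URL = "https://example.com/wp-json/wc/v3"   # placeholder store URL
AUTH = ("ck_your_key", "cs_your_secret")        # Woo REST API keys

def parse_edifact(message: str):
    """Split a raw EDIFACT message into segments (lists of + elements).
    Assumes the default separators and no escaped delimiters."""
    return [seg.strip().split("+") for seg in message.split("'") if seg.strip()]

def extract_lines(segments):
    """Collect (item id, quantity) pairs from LIN/QTY segments."""
    items, current = [], None
    for seg in segments:
        if seg[0] == "LIN" and len(seg) > 3:
            current = seg[3].split(":")[0]
        elif seg[0] == "QTY" and current is not None:
            qualifier, qty = seg[1].split(":")[:2]
            if qualifier == "21":                 # 21 = ordered quantity
                items.append((current, int(qty)))
                current = None
    return items

def create_woo_order(items):
    """POST the order. Woo's create-order endpoint wants product_id, so
    each partner item id / SKU is resolved via the products endpoint."""
    line_items = []
    for sku, qty in items:
        found = requests.get(f"{WOO_URL}/products", auth=AUTH,
                             params={"sku": sku}).json()
        line_items.append({"product_id": found[0]["id"], "quantity": qty})
    resp = requests.post(f"{WOO_URL}/orders", auth=AUTH,
                         json={"line_items": line_items})
    resp.raise_for_status()
    return resp.json()

# usage: create_woo_order(extract_lines(parse_edifact(raw_orders_message)))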

BizTalk core format: redundant but all according to standard, or "keep it simple, stupid"?

I'm quite new to BizTalk mapping and my question is:
Let's say I need to receive a UBL document, convert it to my BizTalk core format, and send out the same UBL. Of course I can map 1:1 UBL -> UBL without a core format, but I may need the core if I later have to send out something else - EDIFACT, OIOXML, whatever - so I believe it's good practice to use one. So it looks like: UBL -> incoming map -> Core -> outgoing map -> UBL.
So the question is: what is the best practice for creating the core format schema?
My incoming file must meet all the OIOUBL standards, so I have to use a pre-defined XSD schema (e.g. this one: http://www.oioubl.info/Classes/en/Order.html). The same goes for the outgoing file.
But on the other hand, I know for a fact that in my case this standard contains a lot of redundant fields. I will never use some of these fields or parameters; others are constants there's no need to store - we can just define default values in the outgoing map, and so on.
So my question is: what is the best practice for building the core format? Is it better to use the full UBL XSD that meets all the standards, even if it's redundant (in which case the incoming and outgoing maps stay simple - I can just use a 1:1 mass copy), or is it better to KISS and simplify the core as much as possible, using only the fields I really need and adding others one by one as I need them?
This question is not about code - just about best practice.
Thanks. I would appreciate any advice.
Best practice is usually for the internal core or canonical schema to match the inbound external schema (apart from the target namespace), so you don't lose any data in the first mapping. Quite often, when you later find you need to send the data to another system and need another outbound map, the fields it requires include ones you didn't need for the original system. Of course, to any rule there is an exception, and it is a matter of judgement when this approach isn't appropriate.
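BizTalk would express this with a map, but purely as an illustration of the "same structure, different target namespace" idea, here is a Python/lxml sketch (the core namespace URI is hypothetical) that mass-copies a document 1:1 into a canonical namespace:

from lxml import etree

CORE_NS = "http://example.local/core/order"   # hypothetical canonical namespace

def to_core(xml_bytes: bytes) -> bytes:
    """Copy an inbound document 1:1 into the canonical 'core' namespace.
    The structure is untouched (the mass-copy approach), so no data is
    lost in the first mapping; only the namespace changes."""
    root = etree.fromstring(xml_bytes)
    for el in root.iter():
        if isinstance(el.tag, str):           # skip comments and PIs
            el.tag = etree.QName(CORE_NS, etree.QName(el).localname).text
    etree.cleanup_namespaces(root)
    return etree.tostring(root, xml_declaration=True, encoding="UTF-8")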

Interpreting EDI MSCONS files

I have an EDI file in MSCONS format. I am trying to parse the file in R and save it as a CSV file. However, I do not have any good explanation of how to proceed. Has anyone out there worked with these sorts of files?
Example:
UNA:+.? '
UNB+UNOC:3+7080005046091:14:TIMER+102953452626:82:TIMER+140312:2152+XGATE019452198++++1'
UNH+1+MSCONS:D:96A:ZZ:E2NO6A'BGM+7+1488136+9+NA'
DTM+137:201403121751:203'DTM+163:201403030000:203'
DTM+164:201403092400:203'DTM+ZZZ:1:805'
NAD+FR+7080005046053::9+++++++NO'
NAD+DO+953452626:NO3:82+++++++NO'UNS+D'
NAD+XX'LOC+90+707057500071137750::9'
RFF+MG:97645'RFF+LI:22446237_17506927'
LIN+1++1491:::SM'MEA+AAZ++KWH'QTY+136:1'
DTM+324:201403030000201403030100:Z13'QTY+136:1'
DTM+324:201403030100201403030200:Z13'QTY+136:2'
DTM+324:201403030200201403030300:Z13'QTY+136:1'
DTM+324:201403030300201403030400:Z13'QTY+136:1'
DTM+324:201403030400201403030500:Z13'QTY+136:2'
DTM+324:201403030500201403030600:Z13'QTY+136:1'
DTM+324:201403030600201403030700:Z13'QTY+136:1'
DTM+324:201403092300201403092400:Z13'CNT+1:167181'
UNT+6832+1'UNZ+1+XGATE019452198'
Download this application to start: EDI Notepad
Open your EDIFACT file in this tool. It will help you with context: what each segment / element is. It should also give you context on the qualifiers and envelopes in the documents. You should also find the source of the document and get an implementation guide, which will explain their specific usage.
Once you apply context and understand what the elements are, parsing becomes easy. You can write your own parser, use an open-source product like BOTS (mentioned above), or purchase commercial translation software (hundreds are available).
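As a sketch of how little a hand-rolled parser needs (Python here purely for illustration -- the same split-and-pair logic ports directly to R), the sample above can be flattened to CSV by pairing each QTY+136 quantity with the DTM+324 interval that follows it. The file names are placeholders, and the code assumes the default UNA separators with no escaped delimiters:

import csv

def mscons_to_rows(raw: str):
    """Pair each QTY+136 (metered quantity) with the DTM+324 interval that
    follows it. A Z13-coded timestamp is the start and end times glued
    together (CCYYMMDDHHMM twice), so it splits at character 12."""
    qty, rows = None, []
    for seg in raw.replace("\n", "").split("'"):
        parts = seg.strip().split("+")
        if parts[0] == "QTY" and parts[1].startswith("136:"):
            qty = parts[1].split(":")[1]
        elif parts[0] == "DTM" and parts[1].startswith("324:") and qty is not None:
            stamp = parts[1].split(":")[1]
            rows.append([stamp[:12], stamp[12:], qty])
            qty = None
    return rows

with open("mscons.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["interval_start", "interval_end", "kwh"])
    writer.writerows(mscons_to_rows(open("mscons.edi").read()))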
The elements within the MSCONS file are well documented. See here: http://www.edi-energy.de - the latest description (in German) is available here: http://www.edi-energy.de/files2/MSCONS_2_2b_Fehlerkorrektur_2014_02_27.pdf

Are there any tools for diffing HTTP requests/responses?

I am trying to debug some problems with very picky/complex web services where clients that are theoretically making the same requests are getting different results. A debugging proxy like Charles helps a lot, but since the requests are complex (lots of headers, cookies, query strings, form data, etc.) and the clients create the headers in different orders (which should be perfectly acceptable), it's an extremely tedious process to do manually.
I'm pondering writing something to do this myself but I was hoping someone else had already solved this problem?
As an aside, does anyone know of any Charles-like debugging proxies that are completely open source? If Charles were open source I would definitely contribute any work I did on this front back to the project. If there is something similar out there, I would much rather do that than write a separate program from scratch (especially since I imagine Charles or any analog already has all of the data structures I might need).
Edit:
Just to be clear -- plain text diffing will not work, as the order of lines (e.g. headers) may differ and/or the order of values within lines (e.g. cookies) may differ. In both cases, as long as the names, values, and metadata are all the same, the different ordering should not cause requests that are otherwise the same to be considered different.
Fiddler has such an option, if you have WinDiff in your path. I don't know whether it will suit your needs, because at first glance it's just doing text comparisons. Perhaps it normalizes the sessions first, but I can't say.
If there's nothing purpose-built for the job, you can use packet capture to get the message content saved to a text file (something that inserts itself into the IP stack, like CommView). Then you can text-diff the results for different messages.
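If you go that route, the normalization step is the only real work; a minimal Python sketch, assuming you have already exported each captured request to a simple dict of method, URL, and header pairs (that shape is my assumption, not any tool's native format):

import difflib, json
from http.cookies import SimpleCookie

def normalize(request: dict) -> dict:
    """Reduce a captured request to an order-insensitive canonical form:
    header names lower-cased, multi-valued headers sorted, and the
    Cookie header exploded into sorted name/value pairs."""
    headers = {}
    for name, value in request["headers"]:
        name = name.lower()
        if name == "cookie":
            jar = SimpleCookie()
            jar.load(value)
            headers["cookie"] = sorted([k, m.value] for k, m in jar.items())
        else:
            headers.setdefault(name, []).append(value)
    return {"method": request["method"], "url": request["url"],
            "headers": {k: sorted(v) if k != "cookie" else v
                        for k, v in headers.items()}}

# Two captures that differ only in header/cookie order normalize identically;
# anything left in the diff output is a real difference.
req_a = {"method": "GET", "url": "/api", "headers": [
    ("Accept", "application/json"), ("Cookie", "sid=42; lang=en")]}
req_b = {"method": "GET", "url": "/api", "headers": [
    ("Cookie", "lang=en; sid=42"), ("Accept", "application/json")]}
a = json.dumps(normalize(req_a), indent=2, sort_keys=True).splitlines()
b = json.dumps(normalize(req_b), indent=2, sort_keys=True).splitlines()
print("\n".join(difflib.unified_diff(a, b, lineterm="")) or "identical")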
Can the open-source proxy Squid maybe help?

Reading a COBOL DAT file

I have been given a set of COBOL DAT, IDX and KEY files, and I need to read the data in them and export it into Access, XLS, CSV, etc. I do not know the version or vendor of the COBOL code, as I only have the Windows executable that created the files.
I have tried Easysoft and Parkway ODBC drivers but I have not been successful in reading the data from the files.
I do not have access to the source code as the company that was distributing this product shut down.
I have successfully read some of the DAT files just now using http://www.cobolproducts.com/datafile, which I came to know of through another forum. Most probably I will work with them to help me read the rest of the files that I am having issues with.
A few possibilities.
1/ See if you can find the names of the people that worked for the company. They may be helpful.
2/ Open the DAT file in a text editor. The data may be decodable from that. If the basic format can be discerned, quick'n'dirty code can be written to extract it.
3/ Open up the executable in an editor, there may be strings in there that indicate which compiler was used, then you can search for info on its file formats. If it's a DOS application, there's a good chance it was either Microsoft or Fujitsu COBOL.
4/ Consider placing job requests on work sites like elance or rentacoder; I don't think there's a cost if the work can't be done successfully.
5/ Hire someone to examine it and advise on the likelihood of recovery.
6/ Get a screen dump of the record contents for every active record and re-construct it from that.
Some of these are pretty hard so your mileage may vary.
Good luck.
I have read COBOL DAT files only with the FD. When I do not have the FD, I open the file in a text editor, try to guess the columns, and try again until I have it working. The big problem with this approach is when the DAT file has COMP columns, which can be any kind of COMP type, but with a little patience I could get it done.
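For the COMP-3 (packed decimal) case specifically, the encoding is simple enough to decode by hand once you've guessed where the field sits: two digits per byte, with the final nibble holding the sign. A minimal Python sketch (the scale, like the column layout, is something you have to guess):

def unpack_comp3(data: bytes, scale: int = 0):
    """Decode a COMP-3 field. Each nibble is a decimal digit except the
    last, which is the sign (0xD = negative; 0xC/0xF = positive).
    'scale' is the number of implied decimal places."""
    digits = []
    for byte in data:
        digits.append((byte >> 4) & 0x0F)
        digits.append(byte & 0x0F)
    sign_nibble = digits.pop()
    value = int("".join(str(d) for d in digits))
    if sign_nibble == 0x0D:
        value = -value
    return value / (10 ** scale) if scale else value

print(unpack_comp3(b"\x12\x34\x5C"))           # 12345
print(unpack_comp3(b"\x12\x34\x5D", scale=2))  # -123.45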
I had tried Parkway ODBC, but without success.
For anyone going on this journey, I found this on SourceForge: Cobol and RPG data reader and converter
http://sourceforge.net/projects/cobol2j/
I'm about to try it; it sounds kind of promising.
