I have some EDI messages (X12, HL7, etc.) stored in an Oracle database. I sometimes want to pull out individual fields (e.g. ISA-03). Currently I have some really ugly SQL. I'd like to create a PL/SQL package to make this easier, and I was wondering if anybody had already done this.
I imagine something like:
select
  edi.x12.extract_field(clob_column, 'ISA', 4)
from
  edi_table
While I have never stored HL7 messages as-is in a database, it should be possible.
The idea of HL7 (and XML) is that it's a common format for systems to use to transfer information. It was never designed as a "storable" item. Usually I would pull the data out of the warehouse format into a particular HL7 message and send it to the MQHub/eGate for transmission. On the return trip I would do the opposite: extract the fields I was warehousing and save those. That is, HL7 should not be stored, so I don't have such a package.
Enough of the lecture. :)
I would suggest a function/procedure per segment, splitting the message into a temp table first; examples of splitting strings in Oracle are easy to find.
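I haven't seen a ready-made package for this, but a single-field extractor is not much code. Below is a minimal sketch, not a tested implementation: the function name is mine, and it assumes '*' as the element separator and '~' as the segment terminator. A real X12 parser should read the separators from the ISA segment itself, since they can vary per interchange.

-- Minimal sketch of an X12 field extractor (assumed separators: '*' and '~').
-- p_field is the element number, e.g. 3 for ISA-03; field 0 is the segment ID.
-- Requires Oracle 11g+ for the subexpression argument of REGEXP_SUBSTR,
-- and assumes a single segment fits in 32k.
create or replace function extract_field(
  p_msg     in clob,
  p_segment in varchar2,
  p_field   in number
) return varchar2
is
  l_seg varchar2(32767);
begin
  -- first segment starting with the given ID, e.g. 'ISA*...' up to the '~'
  l_seg := ltrim(regexp_substr(p_msg, '(^|~)' || p_segment || '\*[^~]*', 1, 1), '~');
  if l_seg is null then
    return null;
  end if;
  -- Nth '*'-delimited element; the grouped pattern keeps empty elements
  -- from shifting the positional count.
  return regexp_substr(l_seg, '([^*]*)(\*|$)', 1, p_field + 1, null, 1);
end extract_field;
/

Wrapped in a package (edi.x12 in the question's naming), this supports exactly the select shown above, and the same regexp approach can drive the split into a temp table.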
I want to issue something, e.g. a new option. Inside the flow where I'm issuing this new option, I need to get info from two separate oracles that provide data for the output state.
How should I do this? Should I have one output and three commands: a command with data from oracle 1, a command with data from oracle 2, and then the issue command? Or can this be done with one command?
It's entirely up to you - your command can contain whatever data you want, so in theory you could do the whole thing with one command.
Having said that, I would probably split it out into at least two commands for clarity and privacy. The privacy element is that you can build a filtered transaction for each oracle to sign that contains only the oracle command.
If you don't mind the two oracles each seeing all the data sent for signing, you could encapsulate the data in one command, e.g.
class OracleCommand(val spotPrice: SpotPrice, val volatility: Volatility) : CommandData
where one oracle attests to the spotPrice and the other to the volatility.
However, you would find it hard to determine what part of the data they attested to since they will both sign over the entire filtered transaction.
Unless you know the oracles' design lets them pick out exactly the data they attest to, you're probably better off going with three separate commands.
I am looking for a way to provide dynamic data (a list) to all nodes, or to specified nodes, at the time of creating a contract in Corda. I don't think an oracle is a good approach in my case, for the following reasons:
The data can be a list of, for example, legal entity names; it is not from the outside world, and it is not a single value;
The list depends on the particular field(s) selected, so perhaps a centralized place will be needed to maintain the data relationship.
Appreciate if anyone can help on this. Thanks.
Kwan
This question is a little difficult to answer without further details on your use case. However, on the surface, an oracle doesn't sound like a bad solution:
The data provided by an oracle can be a list
The term "outside world" simply refers to any information not included in the transaction itself. This term should not be taken too literally.
Ultimately, you can think of an oracle as a provider of "official" data. You request a command containing the data from the oracle, include it in the transaction, and the oracle will sign over the transaction if and only if it agrees that the data in the command is true. As long as the oracle is trusted by all parties involved, this allows data from outside the transaction to be included in it in a reliable way.
I have multiple flat files (CSV), each with multiple records, and the files will be received in random order. I have to combine their records using unique ID fields.
How can I combine them, if there is no common unique field for all files, and I don't know which one will be received first?
Here are some example files:
In reality there are 16 files.
There are many more fields and records than in this example.
I would avoid trying to do this purely in XSLT/BizTalk orchestrations/C# code. These are fairly simple flat files. Load them into SQL, and create a view to join your data up.
You can still use BizTalk to pick up and load the files. You can also still use BizTalk to execute the view or procedure that joins the data and sends your final message.
There are a few questions that might help guide how this would work here:
When do you want to join the data together? What triggers that (a time of day, a certain number of messages received, a certain type of message, a particular record, etc)? How will BizTalk know when it's received enough/the right data to join?
What does a canonical version of this data look like? Does all of the data from all of these files truly get correlated into one entity (e.g. a "Trade" or a "Transfer" etc.)?
I'd probably start by defining my canonical entity, and then work towards getting a "complete" picture of that entity, using SQL for this kind of case.
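As a rough sketch of the staging-and-join idea (every table, column, and view name here is made up; in practice you'd have one staging table per file format and whatever shared keys your data actually has):

-- Hypothetical staging tables, one per flat-file format,
-- populated by the BizTalk receive side as each file arrives.
create table StagedTrades   (TradeRef varchar(50), Quantity int);
create table StagedAccounts (TradeRef varchar(50), AccountNo varchar(20));

-- A view presenting the joined, canonical picture. BizTalk (or a
-- stored procedure it calls) reads from this once all the pieces
-- for a given TradeRef have arrived.
create view CanonicalTrade as
select t.TradeRef, t.Quantity, a.AccountNo
from StagedTrades t
join StagedAccounts a on a.TradeRef = t.TradeRef;

With 16 source files this grows to 16 staging tables and a wider join, but the pattern stays the same.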
I am generating the response schema for a typed stored procedure; the stored procedure does some database updates before returning the final result set. The response schema generated by Visual Studio contains quite a lot of garbage.
Is there a way to force it to generate a cleaner schema?
The StoredProcedureResultset4 is the only one that matters.
Here's the same answer I gave on MSDN. Unfortunately, the marked answer will not work for you, since there is no way (or it's really, really hard) to capture and suppress result sets from a called stored procedure.
The cause is related to the Stored Procedure code.
The Wizard will only generate schema types for elements that are returned in the response from SQL Server. Meaning, the stored procedure is emitting results for those updates, so you're getting metadata for them.
The way to solve this is to modify the SP code so that it doesn't emit a result set from any operation that shouldn't produce one. Basically, if you see it in the results window in SQL Server Management Studio, you will get schema for it.
status and message are presumably the result of another SP, so one way to suppress them is to assign the result to a temp table, thus redirecting it from the output stream.
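For example (a minimal sketch; the inner procedure and column names are hypothetical, and INSERT ... EXEC has its own limits, e.g. it cannot be nested):

-- Inside the stored procedure, instead of letting the inner proc's
-- result set reach the client (and therefore the generated schema):
set nocount on;  -- also suppresses "n rows affected" messages

create table #suppressed (status int, message varchar(200));

-- INSERT ... EXEC redirects the called proc's result set into the
-- temp table instead of the output stream.
insert into #suppressed (status, message)
exec dbo.InnerStatusProc;

-- ...remaining updates...

-- Only the final SELECT reaches the client, so only it
-- (your StoredProcedureResultset4) gets a schema.
select OrderId, Total from dbo.Orders;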
However, if StoredProcedureResultset4 is all that matters, that's all you have to use. There's nothing wrong with just ignoring all the other results, provided they always appear in the same order.
Just to be clear, you still have to modify the SP so it suppresses the unwanted results; simply invoking the original SP from a new SP will not change the output, you'll still get the extra result sets.
In fact, a wrapper would be the harder implementation, since you'd have to capture and examine all the result sets, which I don't think is possible.
The more correct way to do this in BizTalk would be a Port Map that strips the unwanted content.
How do I pass a dataset object to a stored procedure? The dataset comprises multiple tables and I'll need to be able to access them from within the SQL.
You can use a table-valued parameter to pass a single table in SQL Server 2008: http://msdn.microsoft.com/en-us/library/bb675163.aspx
or
refer to this article and use a SQL CLR procedure to pass a dataset: http://blogs.msdn.com/b/jpapiez/archive/2005/09/26/474059.aspx
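A minimal sketch of the table-valued parameter approach from the first link (the type, procedure, and column names are made up):

-- A user-defined table type describes the shape of the rows passed in.
create type dbo.OrderLineType as table
(
    ProductId int,
    Quantity  int
);
go

-- The procedure takes the table as a READONLY parameter and can
-- query it like any other table.
create procedure dbo.InsertOrderLines
    @Lines dbo.OrderLineType readonly
as
begin
    insert into dbo.OrderLines (ProductId, Quantity)
    select ProductId, Quantity
    from @Lines;
end
go

On the .NET side, a DataTable can be passed as a SqlParameter with SqlDbType.Structured. Note that a TVP maps to a single table, so a DataSet with several tables would need one table type and one parameter per table.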
It looks like you can do this with SQL Server 2008 or newer (at least with a DataTable). Here are the links:
http://www.eggheadcafe.com/community/aspnet/10/10138579/passing-dataset-to-stored-procedure.aspx
http://www.sqlteam.com/article/sql-server-2008-table-valued-parameters
As the article from MusiGenesis' answer states:
"In SQL Server 2005 and earlier, it is not possible to pass a table variable as a parameter to a stored procedure. When multiple rows of data need to be sent to SQL Server, developers either had to send one row at a time or come up with other workarounds to meet requirements. While a VB.Net developer recently informed me that there is a SqlBulkCopy object available in .NET to send multiple rows of data to SQL Server at once, the data still cannot be passed to a stored proc."
At the risk of stating the obvious, here are two more approaches:
Parametrize your processing procedure
You might re-evaluate whether you truly need to pass a general table variable. While sometimes this cannot be avoided, the fact that this feature arrived late in MS SQL Server is partly because you can usually get around it by structuring your stored procedures and the flow of your data processing.
If you are able to "parametrize" your process, then you should be able to let stored procedures retrieve the full dataset based on a limited number of parameters.
This will make the process less flexible, but it will also make it more controlled, which is not a bad thing (much like a database that interfaces with applications only through stored procedures is more robust; by limiting flexibility, this approach reduces the number of possible cases and consequently the number of possibly unhandled cases, read: security holes and general bugs).
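For instance (a sketch with hypothetical names): rather than passing in the table of order rows to process, pass the few values that identify them and let the procedure fetch the set itself.

-- The procedure receives the keys identifying the rows instead of the
-- rows themselves, and reads the full set on its own.
create procedure dbo.ProcessOrders
    @CustomerId int,
    @FromDate   date
as
begin
    select OrderId, Total
    from dbo.Orders
    where CustomerId = @CustomerId
      and OrderDate >= @FromDate;
    -- ...processing logic over that set...
end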
Temp tables
Besides the above, there's always the temp-table approach, which can be more or less complicated depending on the scope of sharing you need for the data (sharing can be between DB users, app users, connections, processes, etc.).
A nice side effect is that such an approach allows persistence of the process (which brings you closer to having undo, redo, and the ability to continue interrupted work).
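A minimal sketch of the connection-scoped variant (names are hypothetical; a global ##table or a permanent work table keyed by session would widen the sharing scope and provide the persistence mentioned above):

-- The caller stages rows in a temp table; a stored procedure executed
-- on the same connection can see a #temp table created by its caller.
create table #WorkRows (RowId int, Payload varchar(100));

insert into #WorkRows (RowId, Payload)
values (1, 'first'), (2, 'second');

exec dbo.ProcessWorkRows;  -- hypothetical proc that reads #WorkRows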