BizTalk SQL Adapter composite operation, chaining stored procedure calls

We need to call three stored procedures on the same database, and we're thinking of using a composite operation to wrap them in a single call within the same transaction.
The question is: we need the result of the first stored procedure to be used as the input for the 2nd and 3rd procedures. Is this doable?
Thanks

No, unfortunately not. The map runs first and creates the complete XML message that the SQL adapter then executes, so a later operation can't consume the result of an earlier one.
You could look at making a two-way send port that runs only the first stored procedure, and another send port that subscribes to the response of the first send port and runs the second and third procedures.

This is not possible, I'm afraid.
The input of your composite operation is an XML instance where every input parameter is supplied beforehand.
If it's really necessary to execute these particular stored procedures, you can try wrapping them into one custom stored procedure, where you are free to do what you want; see the sketch after this answer.
You could also try merging the logic from these 3 stored procedures into one new one. Think about scalar functions, table types, table-valued functions, and so on. SQL Server has quite the arsenal to let you do what you want.
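As a rough illustration, here is a minimal sketch of such a wrapper, assuming the first procedure returns a single-row result set; all procedure, parameter, and column names here are hypothetical:

    CREATE PROCEDURE dbo.WrapperProc
        @InputParam INT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Capture the first procedure's result set instead of letting it
        -- flow back to the client (hypothetical single-column shape).
        DECLARE @Results TABLE (NewId INT);
        INSERT INTO @Results (NewId)
        EXEC dbo.FirstProc @InputParam = @InputParam;

        DECLARE @NewId INT = (SELECT TOP (1) NewId FROM @Results);

        -- Feed the captured value to the remaining procedures.
        EXEC dbo.SecondProc @Id = @NewId;
        EXEC dbo.ThirdProc  @Id = @NewId;
    END;

The send port then only has to call dbo.WrapperProc, and all three calls naturally run in the same transaction.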

Yes, you absolutely can do that, but you would not use a Composite Operation.
You would use an Orchestration that performs the calls in sequence, using the Response of one to create the Request of the next using a Map.
This is actually a very common pattern.

Related

SQL Triggers alternative in DynamoDB

I have a list attribute in a table which is a moving window. I wish to create a trigger in DynamoDB such that whenever something is appended to the list, it shifts by one, dropping the earliest value. If I were using SQL, CREATE TRIGGER would've been my go-to, but what about DynamoDB?
AWS refers to it as a trigger in this document. Basically you write a Lambda function to do what you want. However, in your example you would have to be careful not to create an infinite loop where DynamoDB is updated, Lambda is called and updates DynamoDB and then your Lambda is called again. But this post actually calls this design pattern a database trigger.
DynamoDB doesn't have anything like SQL's "Before Update" trigger.
DDB's Streams functionality, while often referred to and used like an "After Update" trigger, isn't really at all like a real RDBMS SQL trigger.
If you really must use DDB, then you're stuck with fronting DDB with your own API that implements the logic you require.
I suppose, as suggested by another answer, you might carefully implement a DDB "trigger" Lambda. But realize you're going to pay for 2 writes for every update instead of just 1. In addition, let's say you want your list to hold the most recent 10 items: your apps would have to be prepared to sometimes see 11, or 12, or 13, since the "trigger" is async from the actual DB writes.
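For contrast, here is a minimal sketch of the relational go-to mentioned in the question, a CREATE TRIGGER that trims the window after each insert; the table, column, and trigger names are hypothetical, as is the window size of 10:

    -- Hypothetical trigger: after each insert, keep only the 10 newest
    -- rows per key, dropping the earliest values.
    CREATE TRIGGER dbo.trgTrimWindow
    ON dbo.WindowItems
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        DELETE w
        FROM dbo.WindowItems AS w
        JOIN inserted AS i ON i.ItemKey = w.ItemKey
        WHERE w.Id NOT IN (
            SELECT TOP (10) w2.Id
            FROM dbo.WindowItems AS w2
            WHERE w2.ItemKey = w.ItemKey
            ORDER BY w2.Id DESC
        );
    END;

This runs synchronously inside the writing transaction, which is exactly the guarantee a Streams-driven Lambda does not give you.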

BizTalk WCF-SQL composite operation for calling stored procedure updates, sequence?

We have a composite operation that invokes stored procedures to update a few tables, but we're running into some issues now, potentially due to the sequence in which the updates are fired. Trying to understand how the composite operation works for the WCF-SQL adapter: I know it uses one transaction context to execute the stored procedures, but does it honor the sequence of the rows when executing them (e.g., run the 1st row, then the 2nd, then the 3rd)? The environment is BizTalk 2013 R2.
Yes, the operations in a Composite Operation are executed in order and I've never had cause to doubt this.
Are you having a specific problem?

BizTalk WCF-SQL typed stored procedure response schema

When generating the response schema for a typed stored procedure, the stored procedure does some database updates prior to returning the final result set, and the response schema generated by Visual Studio contains quite a bit of garbage.
Is there a way to force it to generate a cleaner schema?
The StoredProcedureResultset4 is the only one that matters.
Here are the same answers I gave on MSDN. Unfortunately, the marked answer will not work for you, since there is no way (or it's really, really hard) to capture and suppress result sets from a called Stored Procedure.
The cause is related to the Stored Procedure code.
The Wizard will only generate Schema types for elements that are returned in the response from SQL Server. Meaning, the Stored Procedure is emitting results for those updates so you're getting metadata for them.
The way to solve this is to modify the SP code so that it doesn't emit a result set from any operation that shouldn't produce one. Basically, if you see it in the results window in SQL Server Management Studio, you will get schema for it.
status and message are presumably the result of another SP, so one way to suppress them is to assign the result to a temp table, thus redirecting it from the output stream; see the sketch below.
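For instance, a minimal sketch of that redirection inside the (modified) outer procedure; the nested procedure name, its parameter, and the (status, message) shape are assumptions based on the result sets described above:

    -- Capture the nested procedure's (status, message) result set in a
    -- temp table so it never reaches the output stream.
    CREATE TABLE #suppressed
    (
        [status]  INT,
        [message] NVARCHAR(400)
    );
    INSERT INTO #suppressed ([status], [message])
    EXEC dbo.InnerProc @SomeParam = @SomeParam;

With the nested call redirected like this, only the result sets you actually want are emitted, and the wizard generates schema for those alone.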
However, if StoredProcedureResultset4 is all that matters, that's all you have to use. There's nothing wrong with just ignoring all the other results, provided they always appear in the same order.
Just to be clear, you still have to write the wrapper that suppresses the unwanted results; simply invoking the original SP from a new SP will not change the output, you'll still get the extra result sets.
In fact, a wrapper would be the harder implementation since you'd have to capture and examine all result sets, which I don't think is possible.
The more correct way to do this in BizTalk would be a Port Map that strips the unwanted content.

Database Lookup functoid

I have one table which has 2 IDs.
Now I have to check the id1 value of table1 and, if it is equal to the id in the destination schema,
then I have to take id2 from table1 and assign it to the second element in the destination schema.
How do I do this using the Database Lookup functoid?
I believe Microsoft made a big mistake in including the database functoids. The reasons for this are:
The SQL code generated under the hood is not performant (run a SQL trace and you will see); in fact, more than one connection is sometimes created.
The request/response to SQL Server will not be handled via the send port/adapter framework, so no enterprise-level servicing is available for the call (failure handling, retries, load balancing, etc.).
From a design perspective, it buries the database-calling functionality inside an XSLT, which is nasty.
However, you can achieve the same ends by making the call to the database outside of the map, and then passing the response message from the DB call into the map alongside your source message you want to transform. You can add as many input messages as needed in this way.
If you want details on how to create a multi input map: https://stackoverflow.com/a/7902710/569662

Passing whole dataset to stored procedure in MSSQL 2005

How do I pass a dataset object to a stored procedure? The dataset comprises multiple tables and I'll need to be able to access them from within the SQL.
You can use a table-valued parameter to pass a single table in SQL Server 2008 (sketch below): http://msdn.microsoft.com/en-us/library/bb675163.aspx
Or refer to this article and use a SQL CLR procedure to pass a dataset: http://blogs.msdn.com/b/jpapiez/archive/2005/09/26/474059.aspx
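For the first option, a minimal sketch of the table type and procedure; all names here are hypothetical:

    -- Define a table type once, then accept it as a READONLY parameter.
    CREATE TYPE dbo.OrderLineType AS TABLE
    (
        ProductId INT NOT NULL,
        Quantity  INT NOT NULL
    );
    GO

    CREATE PROCEDURE dbo.ProcessOrderLines
        @Lines dbo.OrderLineType READONLY  -- TVPs must be declared READONLY
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.OrderLine (ProductId, Quantity)
        SELECT ProductId, Quantity
        FROM @Lines;
    END;
    GO

    -- T-SQL usage; from .NET you'd pass a DataTable as the parameter value.
    DECLARE @l dbo.OrderLineType;
    INSERT INTO @l VALUES (1, 5), (2, 3);
    EXEC dbo.ProcessOrderLines @Lines = @l;

Note that a TVP carries one table; a dataset with multiple tables would need one such parameter per table.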
It looks like you can do this with SQL Server 2008 or newer (at least with a DataTable). Here are the links:
http://www.eggheadcafe.com/community/aspnet/10/10138579/passing-dataset-to-stored-procedure.aspx
http://www.sqlteam.com/article/sql-server-2008-table-valued-parameters
As the article from MusiGenesis' answer states:
In SQL Server 2005 and earlier, it is not possible to pass a table variable as a parameter to a stored procedure. When multiple rows of data need to be sent to SQL Server, developers either had to send one row at a time or come up with other workarounds to meet requirements. While a VB.Net developer recently informed me that there is a SqlBulkCopy object available in .NET to send multiple rows of data to SQL Server at once, the data still cannot be passed to a stored proc.
At the risk of stating the obvious, here are two more approaches.
Parametrize your processing procedure
You might re-evaluate whether you truly need to pass a general table variable. While sometimes this cannot be avoided, the reason this is a late addition to SQL Server's feature set is partially that you can usually get around it by structuring your stored procedures and the flow of your data processing.
If you are able to 'parametrize' your process, then you should be able to let stored procedures retrieve the full dataset based on a limited number of parameters; see the sketch below.
This will make the process less flexible, but it will also make it more controlled, which is not a bad thing (much as a database that interfaces with applications only at the level of stored procedures is more robust, this approach, by limiting flexibility, reduces the number of possible cases and consequently the number of possibly unhandled cases, read: security holes and general bugs).
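As a sketch of that parametrized approach, with hypothetical table and column names, the procedure fetches the set itself from a few scalar parameters instead of having the caller ship the rows in:

    CREATE PROCEDURE dbo.ProcessOrders
        @CustomerId INT,
        @FromDate   DATE,
        @ToDate     DATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- The procedure retrieves the full dataset itself from the
        -- scalar parameters; no table has to be passed in.
        SELECT OrderId, OrderDate, Total
        FROM dbo.Orders
        WHERE CustomerId = @CustomerId
          AND OrderDate BETWEEN @FromDate AND @ToDate;
    END;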
Temp tables
Besides the above, there is always the approach with temp tables, which can be more or less complicated depending on the scope of sharing you need for the data (sharing can be between db users, app users, connections, processes, etc.).
A nice side effect is that such an approach allows persistence of the process (which brings you closer to having undo, redo, and the ability to continue interrupted work).
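A minimal sketch of the temp-table route with a global temp table (all names hypothetical); a plain #table would instead be visible only to the creating connection and its nested calls:

    -- The procedure reads the staging table directly; deferred name
    -- resolution lets it be created before ##StagedRows exists.
    CREATE PROCEDURE dbo.ProcessStagedRows
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.TargetTable (Id, Payload)
        SELECT Id, Payload FROM ##StagedRows;
    END;
    GO

    -- Caller: stage the rows (## makes the table visible across
    -- connections), invoke the procedure, then clean up.
    CREATE TABLE ##StagedRows (Id INT, Payload NVARCHAR(200));
    INSERT INTO ##StagedRows (Id, Payload)
    VALUES (1, N'first'), (2, N'second');
    EXEC dbo.ProcessStagedRows;
    DROP TABLE ##StagedRows;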
