BizTalk WCF-SQL typed stored procedure response schema

I'm generating the response schema for a typed stored procedure that performs several database updates before returning its final result set. The response schema generated by Visual Studio contains quite a lot of garbage.
Is there a way to force it to generate a cleaner schema?
The StoredProcedureResultset4 is the only one that matters.

Here's the same answer I gave on MSDN. Unfortunately, the marked answer there won't work for you, since there is no way, or it's really, really hard, to capture and suppress result sets from a called stored procedure.
The cause is related to the Stored Procedure code.
The Wizard will only generate Schema types for elements that are returned in the response from SQL Server. In other words, the Stored Procedure is emitting results for those updates, so you're getting metadata for them.
The way to solve this is to modify the SP code so that operations which shouldn't return anything don't emit a result. Basically, if you see it in the results window in SQL Server Management Studio, you will get schema for it.
status and message are presumably the result of another SP, so one way to suppress that is to assign the result to a temp table, thus redirecting it from the output stream.
However, if StoredProcedureResultset4 is all that matters, that's all you have to use. There's nothing wrong with just ignoring all the other results provided they always appear in the same order.
Just to be clear, you still have to write the wrapper that suppresses the unwanted results; simply invoking the original SP from a new SP will not change the output, and you'll still get the extra result sets.
In fact, a wrapper would be the harder implementation, since you'd have to capture and examine all the result sets, which I don't think is possible.
The more correct way to do this in BizTalk would be a Port Map that strips the unwanted content.

Related

How to view partial results in spatialite-gui?

I'm running a query to create a table using the spatialite gui on my Windows 7 machine. It has been going for days and I would like to cancel it and try something different. Is there a way for me to view the results of the query so far? The .sqlite file has tripled in size and I'm curious about what is happening.
In SQLite, transactions are atomic and isolated. (And if you do not use explicit transactions, every command gets an automatic transaction.)
So there is no easy way to see partial results; the database goes to great efforts to ensure that the transaction either succeeds completely, or is rolled back.
If possible, try the command with a smaller data set, or write the query so that only a part of the data is processed.

Schema editor collection sampling is missing fields

I am attempting to use the ODBC Schema Editor to connect to several Cosmos DB collections for reporting purposes (using Power BI). While I can successfully generate a schema for one collection, another is not working correctly.
The Document in question includes a request object. Within request there should be multiple fields. When I sample my collection in Schema Editor, the resulting schema is missing any array of objects (or anything that includes an array of objects) that should be included under the request object – they are just not listed in the resulting schema. Several others are properly split out into their own tables, but the tables are always empty when the schema is applied (this is not reflective of the underlying data – I would expect to see things in those tables). Behavior does not change if the same collection is re-sampled.
Here's an example:
JSON selection
Does anyone know how I can get the schema editor to recognize all of my data? I'm not sure what to share that would be helpful but I'm happy to provide more if there's something that would be informative.
EDIT: Unless I'm misunderstanding how to query Cosmos DB, I'm seeing the issue show up even if I query the data directly through Data Explorer. Below, you can see that if I select c.request.preparedBy, preparedBy has a property mail:
preparedBy
However, if I try to query c.request.preparedBy.mail directly then I see nothing but blanks, which is exactly what appeared in the Schema Editor:
preparedBy.mail
Thinking that maybe there was a limit to how many layers of depth I could query, I tried selecting from request instead of the entire collection. Interestingly, even though I see preparedBy when I select * from request, request.preparedBy again returns nothing but empty braces.

BizTalk SQL Adapter composite operation, chaining stored procedure calls

We need to call three stored procedures on the same database, and we're thinking of using a composite operation to wrap them in the same call and the same transaction.
The question is: we need the result of the first stored procedure to be used as the input for the 2nd and 3rd procedures. Is this doable?
Thanks
No, unfortunately not. The map will run and create the XML that the SQL adapter will use to execute afterwards.
You could look at making a two-way send port that only runs the first stored procedure; and another send port that subscribes to the response of the first send port and runs the second and third procedures.
This is not possible I'm afraid.
The input of your composite operation is an XML instance, where every input parameter is supplied before hand.
If it's really necessary to execute these particular stored procedures, you can try wrapping them in one custom stored procedure, where you are free to do what you want.
One could also try merging the logic from these 3 stored procedures into one new one. Think about scalar functions, table types, table-valued functions and so on. SQL Server has quite the arsenal to let you do what you want.
Yes, you absolutely can do that, but you would not use a Composite Operation.
You would use an Orchestration that performs the calls in sequence, using the Response of one to create the Request of the next using a Map.
This is actually a very common pattern.

Generate and serve a file on the server on demand. Best way to do it without consuming too many resources?

My application has to export the result of a stored procedure to .csv format. Basically, the client performs a query and can see the results on a paged grid; if it contains what he wants, he clicks an "Export to CSV" button and downloads the whole thing.
The server will have to run a stored procedure that will return the full result without paging, create the file and return it to the user.
The result file could be very large, so I'm wondering what the best way is to create this file on the server on demand and serve it to the client without blowing up the server's memory or resources.
The easiest way: call the stored procedure with LINQ, create a stream, and iterate over the result collection, creating a line in the file per collection item.
Problem 1: Does deferred execution also apply to LINQ calls to stored procedures? (I mean, will .NET try to create a collection with all the items of the result set in memory, or will it give me the results item by item if I iterate instead of calling .ToArray?)
Problem 2: Is that stream kept in RAM until I call .Dispose/.Close?
The not-so-easy way: call the stored procedure with an IDataReader and, for each row, write directly to the HTTP response stream. It looks like a good approach: as long as I write to the response while I read, memory is not blown up.
Is it really worth it?
I hope I have explained myself correctly.
Thanks in advance.
Writing to a stream is the way to go, as it will roughly consume no more than the current "record" and its associated memory. That stream can be a FileStream (if you create a file), the ASP.NET response stream (if you write directly to the web), or any other useful stream.
The advantage of creating a file (using a FileStream) is being able to cache the data to serve the same request over and over. Depending on your needs, this can be a real benefit. You must come up with an intelligent algorithm to determine the file path and name from the input; this will be the cache key. Once you have a file, you can use the TransmitFile API, which leverages the Windows kernel cache and is in general very efficient. You can also play with HTTP client caches (headers like last-modified-since, etc.), so the next time the client requests the same information, you may return a not-modified (HTTP 304 status code) response. The disadvantage of using cache files is that you will need to manage those files: disk space, expiration, etc.
Now, LINQ or IDataReader should not change much about perf or memory consumption, provided you don't use LINQ methods that materialize the whole data (exhaust the stream) or a big part of it. That means you will need to avoid ToArray(), ToList() and other methods like them, and concentrate only on "streamed" methods (enumerations, skips, while, etc.).
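To make that concrete, here is a minimal sketch of the streamed approach, assuming SQL Server via System.Data.SqlClient; the connection string, the dbo.ExportOrders procedure name, and the simple comma-join (no quoting of values) are placeholders, and in ASP.NET the output stream would typically be Response.OutputStream:

using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;

static class CsvExport
{
    // Streams the stored procedure's rows to 'output' one at a time, so memory
    // use stays at roughly one row no matter how large the result set is.
    public static void WriteCsv(string connectionString, Stream output)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.ExportOrders", connection)) // placeholder SP name
        using (var writer = new StreamWriter(output))
        {
            command.CommandType = CommandType.StoredProcedure;
            connection.Open();
            using (IDataReader reader = command.ExecuteReader())
            {
                // Header row taken from the reader's own column names.
                var names = new string[reader.FieldCount];
                for (int i = 0; i < reader.FieldCount; i++)
                    names[i] = reader.GetName(i);
                writer.WriteLine(string.Join(",", names));

                // One CSV line per record; nothing beyond the current row is buffered here.
                var values = new object[reader.FieldCount];
                while (reader.Read())
                {
                    reader.GetValues(values);
                    writer.WriteLine(string.Join(",", values)); // real CSV needs quoting/escaping
                }
            }
        }
    }
}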
I know I'm late to the game here, but theoretically how many records are we talking about? I saw 5000 being thrown around, and if it's around there, that shouldn't be a problem for your server.
Answering the easiest way:
It does unless you specify otherwise (you disable lazy loading).
Not sure I get what you're asking here. Are you referring to a StreamReader you'd be using for creating the file, or the DataContext you are using to call the SP? I believe the DataContext will clean up for you after you're done (always good practice to close anyway). A StreamReader or the like will need Dispose run to be removed from memory.
That being said, when dealing with file exports I've had success in the past building the table (CSV) programmatically (via iteration), then sending the structured data as an HTTP response with the type specified in the header, i.e. the not-so-easy way as you so eloquently stated :). Here's a question that asks how to do that with CSV:
Response Content type as CSV
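For reference, the response side of that approach in ASP.NET looks roughly like the handler below; export.csv is just an example file name, and the CSV content itself would come from streaming code like the CsvExport.WriteCsv sketch above:

using System.Web;

// A minimal HTTP handler that returns CSV with the content type set in the header.
public class CsvExportHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/csv";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");
        // Write directly to context.Response.OutputStream here instead of
        // building the whole file in memory first.
        context.Response.Flush();
    }
}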
"The server will have to run a stored procedure that will return the full result without paging..."
Perhaps not, but I believe you'll need Silverlight...
You can set up a web service or controller that allows you to retrieve data "by the page" (much like calling a 'paging' service for a GridView or other repeater). You can make async calls from Silverlight to get each "page" of data until complete, then use the SaveFileDialog to save to the hard disk.
Hope this helps.
Example 1 | Example 2
What you're talking about isn't really deferred execution, but limiting the results of a query. When you say objectCollection.Take(10), the SQL that is generated when you iterate the enumerable only takes the top 10 results of that query.
That being said, a stored procedure will return whatever results you are passing back, whether it's 5 or 5000 rows of data. Performing a .Take() on the results won't limit what the database returns.
Because of this, my recommendation (if possible for your scenario), is to add paging parameters to your stored procedure (page number, page size). This way, you will only be returning the results you plan to consume. Then when you want the full list for your CSV, you can either pass a large page size, or have NULL values mean "Select all".
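What the call from .NET could look like with such paging parameters; the dbo.GetOrdersPaged name and the @PageNumber/@PageSize parameters are assumptions for illustration, with nulls standing in for "select all":

using System;
using System.Data;
using System.Data.SqlClient;

static class PagedQuery
{
    // Fetches one page of results, or everything when the paging parameters are null.
    public static DataTable GetPage(string connectionString, int? pageNumber, int? pageSize)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.GetOrdersPaged", connection)) // hypothetical paged SP
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@PageNumber", (object)pageNumber ?? DBNull.Value);
            command.Parameters.AddWithValue("@PageSize", (object)pageSize ?? DBNull.Value);

            var table = new DataTable();
            connection.Open();
            table.Load(command.ExecuteReader()); // only the requested page crosses the wire
            return table;
        }
    }
}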

Using Dataset returned when using ExecuteDataSet

I am using the Enterprise Library and am not able to figure this out.
I am using an Oracle procedure to insert records into the database from my ASP.NET application, written in VB.net.
Though it is inserting records as it should, when I try to access the dataset returned, I am not able to see the details of the just-inserted record.
In my Oracle procedure I have an output cursor which should return several column values from the just-inserted record.
Please help.
This is a bit of a workaround compared to what you're currently doing, but if you're still having issues with this, I'd suggest running ExecuteNonQuery for the insert and then ExecuteDataTable with the values you supplied, to SELECT your data back.
Keep in mind, however, that this method's performance may be a bit slower (a DB call to insert, followed by a DB call to select the data back), but you will not need to worry about your cursor anymore (not sure what kind of performance gain, if any, dropping it might bring).
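A rough sketch of that workaround with the Enterprise Library Data Access block, using ExecuteDataSet as in the question title; the procedure names and the p_name parameter are assumptions about your schema, not your actual code:

using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

static class InsertThenSelect
{
    public static DataRow InsertAndFetch(string name)
    {
        Database db = DatabaseFactory.CreateDatabase(); // default connection from config

        // 1) Insert with ExecuteNonQuery (no result set expected).
        using (DbCommand insert = db.GetStoredProcCommand("INSERT_RECORD")) // hypothetical procedure
        {
            db.AddInParameter(insert, "p_name", DbType.String, name); // hypothetical parameter
            db.ExecuteNonQuery(insert);
        }

        // 2) Select the just-inserted record back with ExecuteDataSet.
        using (DbCommand select = db.GetStoredProcCommand("GET_RECORD_BY_NAME")) // hypothetical procedure
        {
            db.AddInParameter(select, "p_name", DbType.String, name);
            DataSet ds = db.ExecuteDataSet(select);
            return ds.Tables[0].Rows[0]; // the inserted record's column values
        }
    }
}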
