LogicApps Web API - making it asynchronous

I had to write a Web API that inserts data into a custom on-premises DB and then calls a stored procedure, for Logic Apps to use. The Logic Apps call times out when passing large amounts of data, so I'm trying to use this solution I found here:
LogicAppsAsyncResponseSample
So I would basically put all my code into doWork, like this:
foreach (var record in records)
{
    ...
    // Insert record
    cmd.ExecuteNonQuery();
}
...
// Call SP
cmd.ExecuteNonQuery();
runningTasks[id] = true;
My question is: should I make my code in doWork asynchronous? That is, use await as needed, use ExecuteNonQueryAsync instead of ExecuteNonQuery, and add AsynchronousProcessing to my connection string?
Alternatively, I was also considering writing this as "fire and forget": I would start a thread in my API to call doWork as in the sample, but return OK right away instead of Accepted. Then I wouldn't need to store thread statuses or have the checkStatus method. This is OK for me since the API can send alerts if anything fails. The only advantage of the sample above is that I could eventually return something to Logic Apps indicating success or failure and show it in my Logic Apps log (one place to see everything). Is "fire and forget" a sound practice?
FYI: the call to doWork in the sample is:
new Thread(() => doWork(id)).Start();
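If you do go the async route, the body of doWork might look roughly like the sketch below. This is only an illustration: records, connectionString and runningTasks are assumed to exist as in the question and the sample, and the table, column and stored procedure names are made up.
using System.Data;
using System.Data.SqlClient;
using System.Threading.Tasks;

// Rough sketch of an async doWork; not the sample's actual code.
private static async Task doWorkAsync(string id)
{
    using (var conn = new SqlConnection(connectionString))
    {
        await conn.OpenAsync();

        foreach (var record in records)
        {
            using (var cmd = new SqlCommand(
                "INSERT INTO dbo.MyTable (Col1) VALUES (@Col1)", conn))
            {
                cmd.Parameters.AddWithValue("@Col1", record.Col1);
                await cmd.ExecuteNonQueryAsync();
            }
        }

        using (var cmd = new SqlCommand("dbo.MyStoredProc", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            await cmd.ExecuteNonQueryAsync();
        }
    }

    runningTasks[id] = true;
}
You could then start it with Task.Run(() => doWorkAsync(id)) instead of new Thread(...). Either way, keep in mind that ASP.NET gives no guarantee that fire-and-forget work survives an application pool recycle, which is the main caveat with the "return OK right away" approach.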

Related

In Disassembler pipeline component - Send only last message out from GetNext() method

I have a requirement where I will be receiving a batch of records. I have to disassemble the records and insert the data into the DB, which I have completed. But I don't want any message to come out of the pipeline except one last, custom-made message.
I have extended FFDasm and called Disassemble(); then GetNext() returns every debatched message, and they fail because there is no subscriber for them. I want to send nothing out from GetNext() until the last message.
Please help if anyone has already implemented this requirement. Thanks!
If you want to send only one message out of GetNext, you have to call the base Disassemble from your Disassemble method and collect all the messages there (you can enqueue these messages so you can manage them in GetNext), like this:
// _messages is a Queue<IBaseMessage>, messagesCount an int field
private readonly Queue<IBaseMessage> _messages = new Queue<IBaseMessage>();
private int messagesCount = 0;

public new void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    try
    {
        base.Disassemble(pContext, pInMsg);
        IBaseMessage message = base.GetNext(pContext);
        while (message != null)
        {
            // Only store one message
            if (this.messagesCount == 0)
            {
                this._messages.Enqueue(message);
                this.messagesCount++;
            }
            message = base.GetNext(pContext);
        }
    }
    catch (Exception ex)
    {
        // Manage errors
    }
}
Then, in the GetNext method, you have the queue and you can return whatever you want:
public new IBaseMessage GetNext(IPipelineContext pContext)
{
    // Return null once the queue is empty to signal there are no more messages
    return _messages.Count > 0 ? _messages.Dequeue() : null;
}
The recommended approach is to publish the messages after the disassemble stage to the BizTalk MessageBox database and use a DB adapter to insert them into the database. Publishing messages to the MessageBox and using an adapter gives you more options on design/performance and decouples the DB insert from the receive logic. Also, if in the future you want to reuse the same messages for something else, you will be able to do so.
Even then, if for any reason you have to insert from the pipeline component, do the following:
Please note that the GetNext() method of the disassembler interface is not invoked until the Disassemble() method is complete. Based on this, you can use the following approach, assuming you have encapsulated FFDASM within your own custom component:
Insert all the disassembled messages in the Disassemble method itself and enqueue only the last message into a Queue class variable. In GetNext(), return the dequeued message; when the queue is empty, return null. You can optimize the DB insert by inserting multiple rows at a time and saving them in batches, depending on volume. Note that this approach may run into performance issues depending on the size of the file and the number of rows being inserted into the DB.
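A rough sketch of that last-message-only variant is below; InsertIntoDb is a hypothetical helper, and the component is assumed to derive from FFDasm as above.
// Insert every debatched message in Disassemble(), publish only the last one.
private readonly Queue<IBaseMessage> _messages = new Queue<IBaseMessage>();

public new void Disassemble(IPipelineContext pContext, IBaseMessage pInMsg)
{
    base.Disassemble(pContext, pInMsg);

    IBaseMessage last = null;
    IBaseMessage msg = base.GetNext(pContext);
    while (msg != null)
    {
        InsertIntoDb(msg);   // hypothetical DB insert helper
        last = msg;
        msg = base.GetNext(pContext);
    }

    if (last != null)
    {
        // If InsertIntoDb read the body stream, seek it back to position 0
        // before publishing the message.
        _messages.Enqueue(last);
    }
}

public new IBaseMessage GetNext(IPipelineContext pContext)
{
    // null tells the pipeline there are no more messages
    return _messages.Count > 0 ? _messages.Dequeue() : null;
}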
I am calling DBInsert SP from GetNext()
Oh...so...sorry to say, but you're doing it wrong and actually creating a bunch of problems doing this. :(
This is a very basic scenario to cover with BizTalk Server. All you need is:
A Pipeline Component to promote BTS.InterchangeID (a rough sketch is shown at the end of this answer).
A Sequential Convoy Orchestration correlating on BTS.InterchangeID and using Ordered Delivery.
In the Orchestration, call the SP, transform to SOAP, call the SOAP endpoint, whatever you need.
As you process the messages, check for BTS.LastInterchangeMessage, then perform your close-out logic.
To be 100% clear, there are no practical 'performance' issues here. By guessing about 'performance', you've actually created the problem you were trying to solve, and created a bunch of support issues for later on, sorry again. :( There is no reason not to use an Orchestration.
As noted, 25K records isn't a lot. Be sure to have the Receive Location and Orchestration in different Hosts.
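For the promotion itself, the pipeline component only needs a few lines in its Execute method. This is a sketch, not production code; it only assumes the standard BizTalk system-properties namespace.
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Promote BTS.InterchangeID so an Orchestration can correlate on it.
private const string SystemPropertiesNs =
    "http://schemas.microsoft.com/BizTalk/2003/system-properties";

public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    object interchangeId = pInMsg.Context.Read("InterchangeID", SystemPropertiesNs);
    if (interchangeId != null)
    {
        pInMsg.Context.Promote("InterchangeID", SystemPropertiesNs, interchangeId);
    }
    return pInMsg;
}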

Can a thread in ASP.NET keep working after Response.End?

I want to make a TCP connection to a device and continuously retrieve data from it. I want to start this with a simple request and keep it working in the background even after the page response has completed. Is this possible in ASP.NET?
Can a thread in ASP.NET keep working after Response.End?
Yes, you can, if you do not care about or do not need the result.
For example, in the following code, you call AddLogAsync to insert a log, but you don't care whether the insert succeeds or not.
public Task AddLogAsync(Log log)
{
    return Task.Run(() => AddLog(log));
}

private void AddLog(Log log)
{
    // Do something here, e.g. write the log entry to the database.
}
I want to make a TCP connection to a device and continuously retrieve data from the device. I want to start this with a simple request and keep it working. Is this possible in ASP.NET?
I don't really understand the question above. After Response.End you cannot return anything else to the client, although you can continue working on something on a different thread.
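If it helps, the background part could look something like the sketch below: a fire-and-forget loop that keeps the TCP connection open after the request finishes. The host, port and ProcessDeviceData handler are placeholders, and an IIS application pool recycle will still stop this work.
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

// Start a background loop that keeps reading from a device
// after the page/request has completed.
public static void StartDevicePolling(string host, int port, CancellationToken token)
{
    Task.Run(async () =>
    {
        using (var client = new TcpClient())
        {
            await client.ConnectAsync(host, port);
            var stream = client.GetStream();
            var buffer = new byte[4096];

            while (!token.IsCancellationRequested)
            {
                int read = await stream.ReadAsync(buffer, 0, buffer.Length, token);
                if (read == 0) break;            // device closed the connection
                ProcessDeviceData(buffer, read); // placeholder for your own handling
            }
        }
    }, token);
}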

Database broadcast in SignalR

I've implemented the tutorial here, Server Broadcast with SignalR, and my next step is to hook it up to a SQL DB via EF Code First.
In the StockTicker class the authors write the following code:
foreach (var stock in _stocks.Values)
{
    if (TryUpdateStockPrice(stock))
    {
        BroadcastStockPrice(stock);
    }
}
The application I am working on needs a real-time push news feed with a small audience (around 300 users). What would be the disadvantages of me simply doing something like this (pseudo):
foreach (var message in _db.Messages.Where(x => x.Status == "New"))
{
    BroadcastMessage(message);
}
and what would be the best way to update each message's status in the DB to something other than "New" without totally compromising performance?
I think the best way to determine whether or not your simple solution compromises performance too much is to try it out.
Something like the following should work for updating each message status.
foreach (var message in _db.Messages.Where(x => x.Status == "New"))
{
    BroadcastMessage(message);
    message.Status = "Read";
}

_db.SaveChanges(); // SubmitChanges() if you are using LINQ to SQL rather than EF
If you find this is too inefficient, you could always write a stored procedure that will select new messages and mark them as read.
It might be better to fine-tune performance by adjusting the rate at which you poll the database and by batching messages, so that you broadcast a single message via SignalR for each DB query even when the DB returns multiple new messages.
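For example, a batched version of that polling step might look roughly like this. It is only a sketch: NewsFeedHub, the broadcastMessages client method and the Message properties are assumptions, and _db is taken to be an EF DbContext.
using System.Linq;
using Microsoft.AspNet.SignalR;

// One DB query, one SignalR broadcast and one SaveChanges per poll.
private void BroadcastNewMessages()
{
    var newMessages = _db.Messages
                         .Where(m => m.Status == "New")
                         .ToList();

    if (newMessages.Count == 0) return;

    // A single broadcast for the whole batch instead of one per message.
    GlobalHost.ConnectionManager
              .GetHubContext<NewsFeedHub>()
              .Clients.All.broadcastMessages(newMessages);

    foreach (var message in newMessages)
    {
        message.Status = "Read";
    }

    _db.SaveChanges();
}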
If you decide to go the stored proc route, here is another fairly in-depth article about using them with EF: http://msdn.microsoft.com/en-us/data/gg699321.aspx

Why does meteor undo changes to collections nested in an observer method?

I am trying to implement something like this:
/* We use the command pattern to encode actions in
   a 'command' object. This allows us to keep an audit trail
   and is required to support 'undo' in the client app. */
CommandQueue.insert(command);

/* Queuing a command should trigger its execution. We use
   an observer for this. */
CommandQueue
  .find({...})
  .observe({
    added: function(command) {
      /* While executing the action encoded by 'command'
         we usually want to insert objects into other collections. */
      OtherCollection.insert(...)
    }
  });
Unfortunately, it seems that Meteor keeps the prior state of OtherCollection while executing the transaction on CommandQueue. Changes are made temporarily to OtherCollection, but as soon as the transaction on CommandQueue finishes, the prior state of OtherCollection is restored and our changes disappear.
Any ideas why this is happening? Is this intended behaviour or a bug?
This is the expected behavior, though it is a little subtle, and not guaranteed (just an implementation detail).
The callback to observe fires immediately when the command is inserted into CommandQueue. So the insert to OtherCollection happens while the CommandQueue.insert method is running, as part of the same call stack. This means the OtherCollection insert is considered part of the local 'simulation' of the CommandQueue insert, and is not sent to the server. The server runs the CommandQueue insert and sends the result back, at which point the client discards the results of the simulation and applies the results sent from the server, making the OtherCollection change disappear.
A better way to do this would be to write a custom method. Something like:
Meteor.methods({
  auditedCommand: function (command) {
    CommandQueue.insert(command);
    var whatever = someProcessing(command);
    OtherCollection.insert(whatever);
  }
});
Then:
Meteor.call('auditedCommand', command);
This will show up immediately on the client (latency compensation) and is more secure as clients can't insert to CommandQueue without also adding to OtherCollection.
EDIT: this will probably change. The added callback shouldn't really be considered part of the local simulation of CommandQueue.insert; that's just the way it works now. That said, a custom method is probably still a better approach for this: it will work even if other people add commands to the command queue, and it is more secure.
I'm not sure about your observe behavior, but we accomplished the same thing using a server-side allow method:
CommandQueue.allow({
  insert: function (userId, doc) {
    OtherCollection.insert(...);
    return (userId && doc.owner === userId);
  }
});
This is also more secure than putting this logic client side.

Any side-effects using SqlParameterCollection.Clear Method?

I have a specific situation where I need to execute a stored procedure up to 3 times before I declare it failed. Why 3 times? Because I am checking whether a job that was started earlier has finished. I am going to ask a separate question about whether there is a better approach, but for now here is what I am doing.
mysqlparametersArray

do {
    reader = MyStaticExecuteReader(query, mysqlparametersArray)
    Read()
    if (field(1) == true) {
        return field(2);
    }
    else {
        // wait 1 sec
    }
} while (field(1) == false);

MyStaticExecuteReader(query, mysqlparametersArray)
{
    // declare command
    // loop through mysqlparametersArray and add it to command
    // ExecuteReader
    return reader
}
Now this occasionally gave me this error:
The SqlParameter is already contained by another
SqlParameterCollection.
After doing some searching, I found the workaround of clearing the parameters collection, so I did this:
MyStaticExecuteReader(query, mysqlparametersArray)
{
    // declare command
    // loop through mysqlparametersArray and add it to the command's Parameters collection
    // ExecuteReader
    command.Parameters.Clear()
    return reader
}
Now I am not getting that error.
Question: Is there any side-effect using .Clear() method above?
Note: the above is sample pseudo code. I actually execute the reader and build the parameters collection in a separate method in a DAL class which is used by others too, so I am not sure whether checking if the parameters collection is empty before adding any parameters is a good way to go.
I have not run into any side effects when I have used this method.
Aside from overhead, or possibly breaking other code that is shared, there is no issue with clearing parameters.
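For context, the error occurs because a SqlParameter instance can belong to only one SqlParameterCollection at a time; Clear() detaches the parameters so the same array can be reused on the next retry. Here is a sketch of what the shared DAL helper might look like with the Clear() moved into a finally block (names are illustrative):
using System.Data;
using System.Data.SqlClient;

// Clear() detaches the parameters so the same SqlParameter[] can be
// added to the next command without the "already contained by another
// SqlParameterCollection" error.
public static SqlDataReader ExecuteMyReader(
    string connectionString, string query, SqlParameter[] parameters)
{
    var conn = new SqlConnection(connectionString);
    var cmd = new SqlCommand(query, conn);
    try
    {
        if (parameters != null)
        {
            cmd.Parameters.AddRange(parameters);
        }
        conn.Open();

        // CloseConnection ties the connection's lifetime to the reader,
        // so disposing the reader also closes the connection.
        return cmd.ExecuteReader(CommandBehavior.CloseConnection);
    }
    catch
    {
        conn.Dispose();
        throw;
    }
    finally
    {
        cmd.Parameters.Clear();
    }
}
An alternative that avoids the issue entirely is to build fresh SqlParameter instances on each call instead of reusing the same array between retries.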
