I have a page that needs to combine data from four different webrequests into a single list of items. Currently, I'm running these sequentially, appending to a single list, then binding that list to my repeater.
However, I would like to be able to call these four webrequests asynchronously so that they can run simultaneously and save load time. Unfortunately, all the async tutorials and articles I've seen deal with a single request, using the finished handler to continue processing.
How can I perform these four requests (the number might even increase!) simultaneously, keeping in mind that each result has to be fed into a single list?
Many thanks!
EDIT: Simplified example of what I'm doing:
var itm1 = Serialize(GetItems(url1));
list.AddRange(itm1);
var itm2 = Serialize(GetItems(url2));
list.AddRange(itm2);
string GetItems(string url)
{
    var webRequest = WebRequest.Create(url) as HttpWebRequest;
    var response = webRequest.GetResponse() as HttpWebResponse;
    string retval;
    using (var sr = new StreamReader(response.GetResponseStream()))
    { retval = sr.ReadToEnd(); }
    return retval;
}
This should be really simple, since your final data depends on the results of all four requests.
What you can do is create four async delegates, each pointing to the appropriate web method. Call BeginInvoke on all of them, then use a WaitHandle to wait for all of them. There is no need to use callbacks in your case, since you do not want to continue while the web methods are being processed, but rather wait until all of them have finished executing.
Only after all the web methods have executed will the code after the wait statement run; that is where you combine the four results.
Here's some sample code I put together for you:
using System;
using System.Collections.Generic;
using System.Runtime.Remoting.Messaging; // AsyncResult, needed to recover the delegate
using System.Threading;

class Program
{
    delegate string DelegateCallWebMethod(string arg1, string arg2);

    static void Main(string[] args)
    {
        // Create a delegate that points to the method which calls the web service.
        // If the web methods have different signatures, wrap them in a common method,
        // or keep a List<DelegateCallWebMethod> with one delegate per method.
        DelegateCallWebMethod del = new DelegateCallWebMethod(CallWebMethod);

        // Keep the IAsyncResults and WaitHandles for later use
        List<IAsyncResult> results = new List<IAsyncResult>();
        List<WaitHandle> waitHandles = new List<WaitHandle>();

        // Call the web methods asynchronously and store the results and wait handles
        for (int counter = 1; counter <= 4; counter++)
        {
            IAsyncResult result = del.BeginInvoke("Method", counter.ToString(), null, null);
            results.Add(result);
            waitHandles.Add(result.AsyncWaitHandle);
        }

        // Make sure that further processing is halted until all the web methods have executed
        WaitHandle.WaitAll(waitHandles.ToArray());

        // Collect the web responses
        string webResponse = String.Empty;
        foreach (IAsyncResult result in results)
        {
            DelegateCallWebMethod invokedDel =
                (result as AsyncResult).AsyncDelegate as DelegateCallWebMethod;
            webResponse += invokedDel.EndInvoke(result);
        }
    }

    // Web method or a class method that sends web requests
    public static string CallWebMethod(string arg1, string arg2)
    {
        // Code that calls the web method and returns the result
        return arg1 + " " + arg2 + " called\n";
    }
}
How about launching each request on its own separate thread and then appending the results to the list?
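For example, here is a minimal sketch of that idea, reusing the asker's GetItems and Serialize methods from above (url3, url4 and the Item element type are placeholders): one thread per URL, with a lock guarding the shared list.

// Sketch only; needs using System.Threading; and using System.Collections.Generic;
var urls = new[] { url1, url2, url3, url4 };   // url3/url4 are placeholders
var list = new List<Item>();                   // Item stands in for whatever Serialize returns
var threads = new List<Thread>();

foreach (string url in urls)
{
    string u = url; // per-thread copy of the loop variable
    var t = new Thread(() =>
    {
        var items = Serialize(GetItems(u));
        lock (list)                 // the list is shared across threads, so guard it
        {
            list.AddRange(items);
        }
    });
    t.Start();
    threads.Add(t);
}

threads.ForEach(t => t.Join());     // wait for all requests before binding the repeater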
You can test the following code:
Parallel.Invoke(() =>
{
    // TODO: run your requests...
});
You need to reference the Parallel Extensions:
http://msdn.microsoft.com/en-us/concurrency/bb896007.aspx
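For instance, something along these lines (again borrowing the asker's GetItems/Serialize from the question and a placeholder Item type); Parallel.Invoke blocks until all of the actions have completed:

// Sketch only; requires .NET 4 (System.Threading.Tasks) or the Parallel Extensions linked above
var list = new List<Item>();   // Item is a placeholder for the element type
var gate = new object();

Parallel.Invoke(
    () => { var r = Serialize(GetItems(url1)); lock (gate) list.AddRange(r); },
    () => { var r = Serialize(GetItems(url2)); lock (gate) list.AddRange(r); },
    () => { var r = Serialize(GetItems(url3)); lock (gate) list.AddRange(r); },
    () => { var r = Serialize(GetItems(url4)); lock (gate) list.AddRange(r); });

// list now holds the combined results and can be bound to the repeater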
@Josh: Regarding your question about sending 4 (potentially more) asynchronous requests and keeping track of the responses (for example to feed into a list): you could write 4 requests and 4 response handlers, but since you will potentially have more requests, you can write an asynchronous loop instead. A classic for loop is made of an init, a condition, and an increment. You can break a classic for loop down into an equivalent while loop, then turn the while loop into a recursive function, and finally make that function asynchronous.
I put some sample scripts at http://asynchronous.me/ ; in your case, select the for loop in the options. If you want the requests to be sent in sequence, i.e. one request after the previous response (request1, response1, request2, response2, request3, response3, etc.), choose Serial communication (i.e. sequential); the code is a bit more intricate. On the other hand, if you don't care about the order in which the responses are received (random order), choose Parallel communication (i.e. concurrent); the code is more intuitive. In either case, each response is associated with its corresponding request by an identifier (a simple integer), so you can keep track of them all.
The site will give you a sample script. The samples are written in JavaScript, but the idea is applicable to any language; adapt the script to your language and coding preferences. With that script, your browser will send the 4 requests asynchronously, and by the identifier you'll be able to tell which request each response corresponds to. Hope this helps. /Thibaud Lopez Schneider
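Since the question is about C#, here is a rough sketch of the "parallel with identifiers" variant of that idea (the URLs are placeholders and error handling is omitted): each callback knows its own index, so every response lands in the right slot.

// Sketch only; requires using System.Net; using System.IO; using System.Threading;
string[] urls = { "http://example.com/1", "http://example.com/2",
                  "http://example.com/3", "http://example.com/4" };  // placeholders
string[] responses = new string[urls.Length];

using (var done = new CountdownEvent(urls.Length))
{
    for (int i = 0; i < urls.Length; i++)
    {
        int id = i;  // the identifier tying this request to its response
        var request = (HttpWebRequest)WebRequest.Create(urls[id]);
        request.BeginGetResponse(ar =>
        {
            using (var response = request.EndGetResponse(ar))
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                responses[id] = reader.ReadToEnd();  // slot the response by identifier
            }
            done.Signal();
        }, null);
    }
    done.Wait();  // all responses received; responses[i] matches urls[i]
}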
[WebMethod]
public static string LoadAccount()
{
    address = new EndpointAddress(objClientSession.ServiceURL);
    proxy = new PMToolServices.MyAppServiceClient(binding, address);

    // Now call the web service to get the accounts
    proxy.wsGetAccountsCompleted += new EventHandler<MyAppServices.wsGetAccountsCompletedEventArgs>(proxy_wsGetAccountsCompleted);
    proxy.wsGetAccountsAsync();

    return strAccountList;
}
I am calling the LoadAccount WebMethod using AJAX. In LoadAccount I have attached the callback proxy_wsGetAccountsCompleted to the WCF wsGetAccounts operation. In proxy_wsGetAccountsCompleted I'm building the result to return from LoadAccount.
Issues:
I'm unable to return the result directly from proxy_wsGetAccountsCompleted, so I've stored it in a globally defined string and return that at the end of the LoadAccount WebMethod. Can I return it directly from proxy_wsGetAccountsCompleted?
When I call the LoadAccount WebMethod the first time it returns a blank result, and if I call it a second time I get the correct result, even though in the code the globally defined string is returned after proxy_wsGetAccountsCompleted is wired up. Is that right?
I'm confused about the sequence of calls and return values between:
proxy.wsGetAccountsAsync();
proxy_wsGetAccountsCompleted();
return strAccountList
You are doing something strange: calling a synchronous operation that in turn starts an asynchronous operation. Of course it will not work the first time.
LoadAccount() returns before the wsGetAccountsAsync() result arrives. You can either call the WCF operation synchronously instead of using wsGetAccountsAsync, or make the whole flow asynchronous, for example by using SignalR.
Remember that when you call the operation the second time, you get the result of the previous request; if your method accepted parameters, you would be storing the wrong value, namely the response to the previous request.
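If the WebMethod really has to return the data itself, one possible workaround (a sketch only; it assumes the generated wsGetAccountsCompletedEventArgs exposes the usual Result property) is to make the call effectively synchronous by blocking on a wait handle until the Completed event fires:

[WebMethod]
public static string LoadAccount()
{
    var address = new EndpointAddress(objClientSession.ServiceURL);
    var proxy = new PMToolServices.MyAppServiceClient(binding, address);

    string accountList = null;
    using (var done = new ManualResetEvent(false))
    {
        proxy.wsGetAccountsCompleted += (s, e) =>
        {
            accountList = e.Result;   // assumption: the generated event args expose a Result property
            done.Set();
        };
        proxy.wsGetAccountsAsync();
        done.WaitOne();               // block this request until the WCF call completes
    }
    return accountList;
}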
I am currently building a Web API service using MVC and am creating the endpoints. For example, my GET endpoint will execute a stored procedure and return the data in JSON format. The model of the returned data can vary in the future, and it seems like using a dynamic return type would remove the need to change the model and mapping every time that happens. Basically, here is some sample code; do you notice any malpractices in my implementation?
[System.Web.Mvc.HttpGet]
[Route("companies/{id}")]
public dynamic GetCompany([FromUri] int id, string userId)
{
    var parameters = new Hashtable
    {
        {"UserID", userId},
        {"CompanyID", id}
    };

    var result = MyDB.ExecuteSp(CompanyReadByIdSp, parameters);
    return result;
}
In fact, this would enable me to transform the object and add whatever I want to it without needing to worry about the model. Is this a bad way of doing things? Thanks in advance.
I am working on a Windows Store (C++) app. This is a method that reads from the database using the web service.
task<std::wstring> Ternet::GetFromDB(cancellation_token cancellationToken)
{
    uriString = ref new String(L"http://myHost:1234/RestServiceImpl.svc/attempt");
    auto uri = ref new Windows::Foundation::Uri(Helpers::Trim(uriString));
    cancellationTokenSource = cancellation_token_source();

    return httpRequest.GetAsync(uri, cancellationTokenSource.get_token()).then([this](task<std::wstring> response) -> std::wstring
    {
        try
        {
            Windows::UI::Popups::MessageDialog wMsg(ref new String(response.get().c_str()), "success");
            wMsg.ShowAsync();
            return response.get();
        }
        catch (const task_canceled&)
        {
            Windows::UI::Popups::MessageDialog wMsg("Couldn't load content. Check internet connectivity.", "Error");
            wMsg.ShowAsync();
            std::wstring abc;
            return abc;
        }
        catch (Exception^ ex)
        {
            Windows::UI::Popups::MessageDialog wMsg("Couldn't load content. Check internet connectivity.", "Error");
            wMsg.ShowAsync();
            std::wstring abc;
            return abc;
        }
    }, task_continuation_context::use_current());
}
I'm confused about how to return the received data to the calling function. Right now, I am calling this function in the constructor of my data class like this:
ternet.GetFromDB(cancellationTokenSource.get_token()).then([this](task<std::wstring> response)
{
    data = ref new String(response.get().c_str());
});
I am getting a COM exception whenever I try to receive the returned data from GetFromDB(). But this one runs fine:
ternet.GetFromDB(cancellationTokenSource.get_token());
Please suggest a better way of chaining the completion of GetFromDB to other code, and a way to get the returned value out of the try{} block in GetFromDB()'s then. Please keep in mind that I am very new to async programming.
If the continuation of the call to GetFromDB is happening on the UI thread (which I believe it will by default, assuming the call site you pasted is occurring in the UI thread), then calling get() on the returned task will throw an exception. It won't let you block the UI thread waiting for a task to finish.
Two suggestions, either of which should fix that problem. The first should work regardless, while the second is only a good option if you're not trying to get the response string to the UI thread (to be displayed, for example).
1) Write your continuations (lambdas that you pass to then) so that they take the actual result of the previous task, rather than the previous task itself. In other words, instead of writing this:
ternet.GetFromDB(...).then([this](task<std::wstring> response) { ... });
write this:
ternet.GetFromDB(...).then([this](std::wstring response) { ... });
The difference with the latter is that the continuation machinery will call get() for you (on a background thread) and then give the result to your continuation function, which is a lot easier all around. You only need to have your continuation take the actual task as an argument if you want to catch exceptions that might have been thrown by the task as it executed.
2) Tell it to run your continuation on a background/arbitrary thread:
ternet.GetFromDB(...).then([this](task<std::wstring> response) { ... }, task_continuation_context::use_arbitrary());
It won't care if you block a background thread; it only cares if you call get() on the UI thread.
I'm developing a Flex application and am having some trouble working with asynchronous calls. This is what I would like to be able to do:
[Bindable] var fooTypes : ArrayCollection = new ArrayCollection();

for each (var fooType : FooType in getFooTypes()) {
    fooType.fooCount = getFooCountForType(fooType);
    itemTypes.addItem(fooType);
}
The issue I'm running into is that both getFooTypes and getFooCountForType are asynchronous calls to a web service. I understand how to populate fooTypes by setting a Responder and using ResultEvent, but how can I call another service using the result? Are there any suggestions/patterns/frameworks for handling this?
If possible, I strongly recommend reworking your remote services to return all the data you need in one swoop.
But, if you do not feel that is possible or practical for whatever reason, I would recommend doing some type of remote call chaining.
Add all the "remote calls" you want to make to an array. Call the first one. In the result handler, process the results and then pop the next one and call it.
I'm a bit unclear from your code sample about when you make the remote call, but I assume it is part of the getFooCountForType method. Conceptually I would do something like this. Define the array of calls to make:
public var callsToMake : Array = new Array();
Cache the fooType currently being processed:
public var fooType : FooType;
Do your loop and store the results:
for each (var fooType : FooType in getFooTypes()) {
    callsToMake.push(fooType);
    // based on your code sample I'm unclear if adding the fooTypes to itemTypes is best done here or in the result handler
    itemTypes.addItem(fooType);
}
Then call the remote method and save the fooType you're processing:
fooType = callsToMake.pop();
getFooCountForType(fooType);
In the result handler do something like this:
// process results, possibly by setting
fooType.fooCount = results.someResult;
and call the remote method again:
fooType = callsToMake.pop();
getFooCountForType(fooType);
I have an ASP.NET application with a lot of dynamic content. The content is the same for all users belonging to a particular client. To reduce the number of database hits required per request, I decided to cache client-level data. I created a static class ("ClientCache") to hold the data.
The most-often used method of the class is by far "GetClientData", which brings back a ClientData object containing all stored data for a particular client. ClientData is loaded lazily, though: if the requested client data is already cached, the caller gets the cached data; otherwise, the data is fetched, added to the cache and then returned to the caller.
Eventually I started getting intermittent crashes in the GetClientData method on the line where the ClientData object is added to the cache. Here's the method body:
public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
        _clients = new Dictionary<Guid, ClientData>();

    ClientData client;
    if (_clients.ContainsKey(fk_client))
    {
        client = _clients[fk_client];
    }
    else
    {
        client = new ClientData(fk_client);
        _clients.Add(fk_client, client);
    }
    return client;
}
The exception text is always something like "An object with the same key already exists."
Of course, I tried to write the code so that it just wasn't possible to add a client to the cache if it already existed.
At this point, I'm suspecting that I've got a race condition and the method is being executed twice concurrently, which could explain how the code would crash. What I'm confused about, though, is how the method could be executed twice concurrently at all. As far as I know, any ASP.NET application only ever fields one request at a time (that's why we can use HttpContext.Current).
So, is this bug likely a race condition that will require putting locks in critical sections? Or am I missing a more obvious bug?
If an ASP.NET application only handled one request at a time, all ASP.NET sites would be in serious trouble. ASP.NET can process dozens at a time (typically 25 per CPU core).
You should use ASP.NET Cache instead of using your own dictionary to store your object. Operations on the cache are thread-safe.
Note that you need to be sure that read operations on the objects you store in the cache are thread-safe; unfortunately, most .NET classes simply state that their instance members aren't thread-safe without pointing out any that may be.
Edit:
A comment on this answer states:
Only atomic operations on the cache are thread safe. If you do something like check if a key exists and then add it, that is NOT thread safe and can cause the item to be overwritten.
It's worth pointing out that if we feel we need to make such an operation atomic, then the cache is probably not the right place for the resource.
I have quite a bit of code that does exactly what the comment describes. However, the resource being stored will be the same in both places. Hence, if an existing item on rare occasions gets overwritten, the only cost is that one thread unnecessarily generated a resource. The cost of this rare event is much less than the cost of trying to make the operation atomic on every access.
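A minimal sketch of that approach, reusing the ClientData constructor from the question (the cache key prefix and the 30-minute sliding expiration are arbitrary choices): two threads may occasionally both build the object, and the later Insert simply overwrites the earlier one, which is exactly the "rare wasted work" tradeoff described above.

// Sketch only; requires using System.Web; using System.Web.Caching;
public static ClientData GetClientData(Guid fk_client)
{
    string key = "ClientData_" + fk_client;               // arbitrary key scheme
    var client = HttpRuntime.Cache[key] as ClientData;
    if (client == null)
    {
        client = new ClientData(fk_client);
        HttpRuntime.Cache.Insert(key, client, null,
            Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(30));  // arbitrary expiration
    }
    return client;
}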
This is very easy to fix:
private static readonly object _clientsLock = new Object();

public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
        lock (_clientsLock)
            // Check again because another thread could have created a new
            // dictionary in-between the lock and this check
            if (_clients == null)
                _clients = new Dictionary<Guid, ClientData>();

    if (_clients.ContainsKey(fk_client))
        // Don't need a lock here UNLESS there are also deletes. If there are
        // deletes, then a lock like the one below (in the else) is necessary
        return _clients[fk_client];
    else
    {
        ClientData client = new ClientData(fk_client);
        lock (_clientsLock)
            // Again, check again because another thread could have added this
            // ClientData between the last ContainsKey check and this add
            if (!_clients.ContainsKey(fk_client))
                _clients.Add(fk_client, client);
        return client;
    }
}
Keep in mind that whenever you mess with static classes, you have the potential for thread synchronization problems. If there's a static class-level list of some kind (in this case, _clients, the Dictionary object), there's DEFINITELY going to be thread synchronization issues to deal with.
Your code really does assume only one thread is in the function at a time.
That simply won't be true in ASP.NET.
If you insist on doing it this way, use a static semaphore to lock the area around this class.
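A minimal sketch of that "one lock around everything" idea (using a plain lock/Monitor in place of a semaphore; coarser than the double-checked versions elsewhere in this thread, but simple to reason about):

private static readonly object _sync = new object();
private static Dictionary<Guid, ClientData> _clients;

public static ClientData GetClientData(Guid fk_client)
{
    lock (_sync)   // only one thread at a time touches the dictionary
    {
        if (_clients == null)
            _clients = new Dictionary<Guid, ClientData>();

        ClientData client;
        if (!_clients.TryGetValue(fk_client, out client))
        {
            client = new ClientData(fk_client);
            _clients.Add(fk_client, client);
        }
        return client;
    }
}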
You need thread safety while minimizing locking.
See Double-checked locking (http://en.wikipedia.org/wiki/Double-checked_locking).
Write it simply with TryGetValue:
public static object lockClientsSingleton = new object();

public static ClientData GetClientData(Guid fk_client)
{
    if (_clients == null)
    {
        lock (lockClientsSingleton)
        {
            if (_clients == null)
            {
                _clients = new Dictionary<Guid, ClientData>();
            }
        }
    }

    ClientData client;
    if (!_clients.TryGetValue(fk_client, out client))
    {
        lock (_clients)
        {
            if (!_clients.TryGetValue(fk_client, out client))
            {
                client = new ClientData(fk_client);
                _clients.Add(fk_client, client);
            }
        }
    }
    return client;
}