How to abort old request processing when a new request arrives in ASP.NET MVC 5?

I have a form with hundreds of checkboxes and dropdown menus (the values of many of which are coupled together). The action contains an update mechanism that updates an object in Session. This object does all the validation and coupling of values; for example, if the user types 50% in one input field, we might add 3 new SelectListItem entries to a dropdown.
Everything works fine, but if the user starts clicking checkboxes very quickly (which is the normal case in our scenario), the controller gets multiple POSTs while it is still processing previous ones. Fortunately we are only interested in the last POST, so we need a way to abort/cancel ongoing requests when a newer request from the same form arrives.
What I tried:
1- Blocking the client side from making multiple posts while the server is still working on the previous one. This is not desirable because it causes noticeable pauses on the browser side.
2- There are several solutions for blocking multiple postbacks using hash codes or the AntiForgeryToken. But they don't do what I need: I need to abort the ongoing thread in favor of the new request, not block the incoming request.
3- I tried to extend the pipeline by adding two message handlers (one before the action and another after executing the action) to keep a hash code (or AntiForgeryToken), but the problem is still there: even though I can detect that there is an ongoing thread working on the same request, I have no way to abort that thread or mark the older request as complete.
Any thoughts?

The only thing you can do is throttle (debounce) the requests client-side. Basically, you set a timeout when a checkbox is clicked. You can let the initial request go through, but any further requests are queued (or, in your scenario, dropped after the first queued request) and don't run until the timeout clears.
There's no way to abort a request server-side. Each request is handled independently; there is no inherent knowledge of anything that's happened before or since. The server has multiple threads fielding requests and will simply process them as fast as it can. There's no guaranteed order to how the requests are processed or how the responses are sent out: the first request could be the third one to receive a response, simply due to how the processing of each request goes.

You are trying to implement transactional functionality (i.e. counting only the last request) on top of an asynchronous technology. This is a design flaw.
Since you refuse to block on the client side, you have no way to control which requests are processed first, or to correctly apply the outcome on the client side.
You might actually run into this scenario:
Client sends Request A
Server starts processing Request A
Client sends Request B
Server starts processing Request B
Server returns results of Request B, and client changes accordingly
Server returns results of Request A, and client changes accordingly (and undoes prior changes resulting from Request B)
Blocking is the only way you can ensure the correct order.
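If blocking really is unacceptable, one common compromise (not part of this answer's recommendation, just a sketch of an alternative) is to tag each POST with a client-incremented sequence number and have the server ignore any request older than the newest one it has seen for that session. The action name, the "seq" form field, and the session key below are all hypothetical:

[HttpPost]
public ActionResult UpdateForm(FormCollection form)
{
    // Hypothetical sketch: drop stale requests using a per-session sequence number.
    // Assumes the client increments a hidden "seq" field on every POST.
    long seq = long.Parse(form["seq"]);
    long latest = (long?)Session["LatestSeq"] ?? -1;

    if (seq <= latest)
    {
        // A newer request has already been seen; tell the client to ignore this one.
        return new HttpStatusCodeResult(409, "Superseded by a newer request");
    }

    Session["LatestSeq"] = seq;
    // ... process the form as usual ...
    return Json(new { ok = true });
}

This doesn't abort in-flight work, it only refuses to apply stale updates, so it addresses the ordering problem rather than the wasted processing.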

Thanks for your help @xavier-j.
After playing around with this, I wrote the following. Hope it's useful for someone who needs the same thing:
First you need to add this ActionFilter:
public class KeepLastRequestAttribute : ActionFilterAttribute
{
    public string HashCode { get; set; }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        base.OnActionExecuting(filterContext);

        // Fetch (or create) the application-wide map of form token -> cancellation source.
        Dictionary<string, CancellationTokenSource> clt;
        if (filterContext.HttpContext.Application["CancellationTokensDictionary"] != null)
        {
            clt = (Dictionary<string, CancellationTokenSource>)filterContext.HttpContext.Application["CancellationTokensDictionary"];
        }
        else
        {
            clt = new Dictionary<string, CancellationTokenSource>();
        }

        // Use the anti-forgery token as the key identifying "the same form".
        if (filterContext.HttpContext.Request.Form["__RequestVerificationToken"] != null)
        {
            HashCode = filterContext.HttpContext.Request.Form["__RequestVerificationToken"];
        }

        // Cancel the previous request for this form, if any, and register the new one.
        CancellationTokenSource oldCt = null;
        clt.TryGetValue(HashCode, out oldCt);
        CancellationTokenSource ct = new CancellationTokenSource();
        if (oldCt != null)
        {
            oldCt.Cancel();
            clt[HashCode] = ct;
        }
        else
        {
            clt.Add(HashCode, ct);
        }

        filterContext.HttpContext.Application["CancellationTokensDictionary"] = clt;
        filterContext.Controller.ViewBag.CancellationToken = ct;
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        base.OnResultExecuted(filterContext);

        // If this request ran to completion (wasn't cancelled), remove its token from the map.
        if (filterContext.Controller.ViewBag.ThreadHasBeenCanceled == null && filterContext.HttpContext.Application["CancellationTokensDictionary"] != null)
        {
            lock (filterContext.HttpContext.Application["CancellationTokensDictionary"])
            {
                Dictionary<string, CancellationTokenSource> clt = (Dictionary<string, CancellationTokenSource>)filterContext.HttpContext.Application["CancellationTokensDictionary"];
                clt.Remove(HashCode);
                filterContext.HttpContext.Application["CancellationTokensDictionary"] = clt;
            }
        }
    }
}
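If you prefer not to decorate individual actions, the filter could also be registered globally in App_Start/FilterConfig.cs. This is just a sketch; note it only makes sense if every POST carries the anti-forgery token the filter keys on:

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        filters.Add(new HandleErrorAttribute());
        // apply last-request-wins cancellation to all actions
        filters.Add(new KeepLastRequestAttribute());
    }
}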
I am using the AntiForgeryToken here as the key; you can add your own custom hash code to have more control.
In the controller you will have something like this:
[HttpPost]
[KeepLastRequest]
public async Task<ActionResult> DoSlowJob(CancellationToken ct)
{
    // Combine MVC's token, the filter's token, and the client-disconnect token.
    CancellationTokenSource ctv = ViewBag.CancellationToken;
    CancellationTokenSource nct = CancellationTokenSource.CreateLinkedTokenSource(ct, ctv.Token, Response.ClientDisconnectedToken);
    var mt = Task.Run(() =>
    {
        SlowJob(nct.Token);
    }, nct.Token);
    await mt;
    return null;
}

private void SlowJob(CancellationToken ct)
{
    for (int i = 0; i < 10; i++)
    {
        Thread.Sleep(200);
        if (ct.IsCancellationRequested)
        {
            this.ViewBag.ThreadHasBeenCanceled = true;
            System.Diagnostics.Debug.WriteLine("cancelled!!!");
            break;
        }
        System.Diagnostics.Debug.WriteLine("doing job " + (i + 1));
    }
    System.Diagnostics.Debug.WriteLine("job done");
}
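As a side note, the polling loop above could also use cooperative cancellation via ThrowIfCancellationRequested, letting the linked token terminate the loop with an exception. A variant sketch of the same SlowJob body (not the author's code):

private void SlowJob(CancellationToken ct)
{
    try
    {
        for (int i = 0; i < 10; i++)
        {
            Thread.Sleep(200);
            ct.ThrowIfCancellationRequested(); // throws OperationCanceledException once cancelled
            System.Diagnostics.Debug.WriteLine("doing job " + (i + 1));
        }
        System.Diagnostics.Debug.WriteLine("job done");
    }
    catch (OperationCanceledException)
    {
        this.ViewBag.ThreadHasBeenCanceled = true;
        System.Diagnostics.Debug.WriteLine("cancelled!!!");
    }
}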
And finally, in your JavaScript you need to abort ongoing requests, otherwise the browser blocks new requests:
var currentRequest = null;

var onSomethingChanged = function () {
    if (currentRequest != null) {
        currentRequest.abort();
    }
    var fullData = $('#my-heavy-form :input').serializeArray();
    currentRequest = $.post('/MyController/DoSlowJob', fullData).done(function (data) {
        // Do whatever you want with the returned data
    }).fail(function (f) {
        console.log(f);
    });
    currentRequest.always(function () {
        currentRequest = null;
    });
};

Related

Triggering a fallback using @HystrixProperty timeout for HTTP status codes and other exceptions

I have a function in my @Service class that is marked with @HystrixCommand.
This method acts as a client which sends a request to another service URL and gets back a response.
What I want to do is to trigger a fallback function when the response status code is anything other than 200. It will also trigger a fallback for any other exceptions (RuntimeExceptions etc.).
I want to do this by making use of @HystrixProperty or @HystrixCommandProperty.
I want the client to ping the URL and listen for a 200 response status and if it does not get back a 200 status within a certain time-frame I want it to fallback.
If it gets back a 200 status normally within a certain time it should not trigger the fallback.
@HystrixCommand(fallbackMethod = "fallbackPerformOperation")
public Future<Object> performOperation(String requestString) throws InterruptedException
{
    return new AsyncResult<Object>() {
        @Override
        public Object invoke() {
            Client client = null;
            WebResource webResource = null;
            ClientResponse response = null;
            String results = null;
            try {
                client = Client.create();
                webResource = client.resource(URL);
                client.setConnectTimeout(10000);
                client.setReadTimeout(10000);
                response = webResource.type("application/xml")
                        .post(ClientResponse.class, requestString);
                results = response.getEntity(String.class); // read the body before destroying the client
            } finally {
                client.destroy();
                webResource = null;
            }
            return results;
        }
    };
}
I specifically want to make use of @HystrixProperty or @HystrixCommandProperty, so performing a check inside the method for a non-200 response status code and then throwing an Exception is not acceptable.
Instead of using annotations, will creating my own command by extending the HystrixCommand class work?
Any ideas or resources for where I can start with this are more than welcome.
I don't understand why you don't want to check the response HTTP status code and throw an exception if it is not 200. Doing that will give you the behaviour you desire, i.e. it will trigger a fallback for exceptions or non-200 responses.
You can set the timeout in the client; however, I would opt for using the Hystrix timeout values. That way you can use Archaius to change the value dynamically at runtime if desired.
You can use the Hystrix command annotation or extend the HystrixCommand class. Both options will provide you with your desired behaviour.
Here is an example using the annotation.
@HystrixCommand(fallbackMethod = "getRequestFallback")
public String performGetRequest(String uri) {
    Client client = Client.create();
    WebResource webResource = client.resource(uri);
    ClientResponse response = webResource.get(ClientResponse.class);
    if (response.getStatus() != 200) {
        throw new RuntimeException("Invalid response status");
    }
    return response.getEntity(String.class);
}

public String getRequestFallback(String uri) {
    return "Fallback Value";
}

Handle large number of PUT requests to a rest api

I have been trying to find a way to make this task more efficient. I am consuming a REST based web service and need to update information for over 2500 clients.
I am using Fiddler to watch the requests, and I'm also updating a table with an update time when each one completes. I'm getting about 1 response per second. Are my expectations too high? I'm not even sure what I would define as 'fast' in this context.
I am handling everything in my controller and have tried running multiple web requests in parallel based on examples around the place, but it doesn't seem to make a difference. To be honest, I don't understand it well enough and was just trying to get it to build. I suspect it is still waiting for each request to complete before firing the next.
I have also increased connections in my web.config file as per another suggestion, with no success:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="20" />
  </connectionManagement>
</system.net>
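For what it's worth, the same limit can also be set programmatically, equivalent to the config entry above (the value 20 is just mirrored from it):

// e.g. in Application_Start
System.Net.ServicePointManager.DefaultConnectionLimit = 20;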
My controller's action method looks like this:
public async Task<ActionResult> UpdateMattersAsync()
{
    // Only get matters we haven't synced yet
    List<MatterClientRepair> repairList = Data.Get.AllUnsyncedMatterClientRepairs(true);
    // Take the next 500
    List<MatterClientRepair> subRepairList = repairList.Take(500).ToList();
    FinalisedMatterViewModel vm = new FinalisedMatterViewModel();
    using (ApplicationDbContext db = new ApplicationDbContext())
    {
        int jobCount = 0;
        foreach (var job in subRepairList)
        {
            // If not yet synced - it shouldn't ever be!!
            if (!job.Synced)
            {
                jobCount++;
                // set up some Authentication fields
                var oauth = new OAuth.Manager();
                oauth["access_token"] = Session["AccessToken"].ToString();
                string uri = "https://app.com/api/v2/matters/" + job.Matter;
                // prepare the json object for the body
                MatterClientJob jsonBody = new MatterClientJob();
                jsonBody.matter = new MatterForUpload();
                jsonBody.matter.client_id = job.NewClient;
                string jsonString = jsonBody.ToJSON();
                // Send it off. It returns the whole object we updated - we don't actually do anything with it
                Matter result = await oauth.Update<Matter>(uri, oauth["access_token"], "PUT", jsonString);
                // update our entities
                var updateJob = db.MatterClientRepairs.Find(job.ID);
                updateJob.Synced = true;
                updateJob.Update_Time = DateTime.Now;
                db.Entry(updateJob).State = System.Data.Entity.EntityState.Modified;
                if (jobCount % 50 == 0)
                {
                    // save every 50 changes
                    db.SaveChanges();
                }
            }
        }
        // if there are remaining changes to save
        if (jobCount % 50 != 0)
        {
            db.SaveChanges();
        }
        return View("FinalisedMatters", Data.Get.AllMatterClientRepairs());
    }
}
And of course the Update method itself, which handles the web request:
public async Task<T> Update<T>(string uri, string token, string method, string json)
{
    var authzHeader = GenerateAuthzHeader(uri, method);
    // prepare the request
    var request = (HttpWebRequest)WebRequest.Create(uri);
    request.Headers.Add("Authorization", authzHeader);
    request.Method = method;
    request.ContentType = "application/json";
    request.Accept = "application/json, text/javascript";
    byte[] bytes = System.Text.Encoding.ASCII.GetBytes(json);
    request.ContentLength = bytes.Length;
    using (System.IO.Stream os = request.GetRequestStream())
    {
        os.Write(bytes, 0, bytes.Length);
    }
    WebResponse response = await request.GetResponseAsync();
    using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
    {
        return JsonConvert.DeserializeObject<T>(reader.ReadToEnd());
    }
}
If it's not possible to do more than 1 request per second, then I'm interested in looking at an Ajax solution so I can give the user some feedback while it is processing. In my current solution I can't give the user any feedback until the action method reaches 'return', can I?
Okay, it's taken me a few days (and a LOT of trial and error) but I've worked this out. Hopefully it can help others. I finally found my silver bullet, and it was probably the place I should have started:
MSDN: Consuming the Task-based Asynchronous Pattern
In the end, the following line of code is what brought it all to light:
string [] pages = await Task.WhenAll(from url in urls select DownloadStringAsync(url));
I substituted a few things to make it work for a PUT request as follows:
HttpResponseMessage[] results = await Task.WhenAll(from p in toUpload select client.PutAsync(p.uri, p.jsonContent));
'toUpload' is a List of MyClass:
public class MyClass
{
    // the URI should be relative to the base address
    // (ie: /api/v2/matters/101)
    public string uri { get; set; }

    // a string in JSON format, being the body of the PUT request
    public StringContent jsonContent { get; set; }
}
The key was to stop trying to put my PutAsync method inside a loop. My new line of code IS still blocking until ALL responses have come back, but that is what I wanted. Also, learning that I could use this LINQ-style expression to create a list of tasks on the fly was immeasurably helpful. I won't post all the code (unless someone wants it) because it's not as nicely refactored as the original, and I still need to check whether the response to each item was 200 OK before I record it as successfully saved in my database. So how much faster is it?
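For anyone who wants a concrete starting point, here is a minimal sketch of the pattern described above; toUpload, the base address, and the success check are illustrative, not the author's final code:

// Sketch: fire all PUTs concurrently and await them together.
using (var client = new HttpClient())
{
    client.BaseAddress = new Uri("https://app.com"); // assumed base address
    HttpResponseMessage[] results = await Task.WhenAll(
        from p in toUpload select client.PutAsync(p.uri, p.jsonContent));

    foreach (var response in results)
    {
        if (response.IsSuccessStatusCode) // e.g. 200 OK
        {
            // mark the corresponding record as synced
        }
    }
}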
Results
I tested a sample of 50 web service calls from my local machine. (There is some saving of records to a SQL Database in Azure at the end).
Original Synchronous Code: 70.73 seconds
Asynchronous Code: 8.89 seconds
That's gone from about 1.41 seconds per request to a mind-melting 0.18 seconds per request (if you average it out), i.e. from roughly 0.7 to 5.6 requests per second!
Conclusion
My journey isn't over. I've just scratched the surface of asynchronous programming and am loving it. I now need to work out how to save only the results that returned 200 OK. I can deserialize the HttpResponse, which returns a JSON object (with a unique ID I can look up, etc.), OR I could use the Task.WhenAny method and experiment with interleaving.
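A sketch of the Task.WhenAny interleaving idea mentioned above, again assuming an HttpClient named client and the hypothetical toUpload list; each response is handled as soon as it completes rather than waiting for all of them:

// Sketch: handle each response as it completes (interleaving with Task.WhenAny).
var pending = (from p in toUpload select client.PutAsync(p.uri, p.jsonContent)).ToList();
while (pending.Count > 0)
{
    Task<HttpResponseMessage> finished = await Task.WhenAny(pending);
    pending.Remove(finished);
    HttpResponseMessage response = await finished;
    if (response.IsSuccessStatusCode)
    {
        // record this item as saved
    }
}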

SignalR Long Running Process

I have set up a SignalR hub which has the following method:
public void SomeFunction(int SomeID)
{
    try
    {
        Thread.Sleep(600000);
        Clients.Caller.sendComplete("Complete");
    }
    catch (Exception ex)
    {
        // Exception Handling
    }
    finally
    {
        // Some Actions
    }
    m_Logger.Trace("*****Trying To Exit*****");
}
The issue I am having is that SignalR initiates and defaults to Server-Sent Events and then hangs. Even though the function/method exits minutes later (10 minutes), the method is initiated again (after more than 3 minutes), even when the sendComplete and hub.stop() methods are called on the client beforehand. Should the user stay on the page, the initial "/send?" request stays open indefinitely. Any assistance is greatly appreciated.
To avoid blocking the method for so long, you could use a Task and call the client method asynchronously:
public void SomeFunction(Int32 id)
{
    var connectionId = this.Context.ConnectionId;
    Task.Delay(600000).ContinueWith(t =>
    {
        var message = String.Format("The operation has completed. The ID was: {0}.", id);
        var context = GlobalHost.ConnectionManager.GetHubContext<SomeHub>();
        context.Clients.Client(connectionId).SendComplete(message);
    });
}
Hubs are created when a request arrives and destroyed after the response is sent down the wire, so in the continuation task you need to create a new hub context to be able to address a client by their connection id, since the original hub instance will no longer be around to provide you with the Clients property.
Also note that you can leverage the nicer syntax that uses async and await keywords for describing asynchronous program flow. See examples at The ASP.NET Site's SignalR Hubs API Guide.
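For illustration, the same method using async/await (a sketch; with an async hub method, the hub instance stays alive until the awaited work completes, so Clients.Caller can be used directly):

public async Task SomeFunction(Int32 id)
{
    await Task.Delay(600000); // non-blocking delay instead of Thread.Sleep
    var message = String.Format("The operation has completed. The ID was: {0}.", id);
    Clients.Caller.sendComplete(message);
}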

What is the way to perform some session cleanup logic regardless of user logout/timeout/browser close?

I have an IIS-hosted web application with a C# backend.
When a user logs in, I want to instantiate an HttpClient() for the logged-in user to communicate with the backend over a REST API. Once that client is created, the backend will initialize some user-specific memory which should be cleared once the user has logged out (that is, when the HttpClient() object is disposed).
It seems like the right thing to do here is to instantiate that HttpClient() object at log-in, and then have some code that is called when the user manually logs out, the session times out, or the user closes the browser, and have that code dispose of the HttpClient() manually.
This is surely a well-travelled problem, so there must be an elegant solution to it. How can I dispose of this user-specific HttpClient() when any possible log-out scenario occurs (manual/timeout/browser close)?
Handling the departure of a web user is not trivial, as the HTTP protocol is stateless. The server can never be certain whether the user is still there: an HTTP connection that gets closed doesn't mean the user has gone away, and the server can think that a connection is still open even though the user is no longer there.
Unless you will be using the HttpClient object so intensively that you expect keeping it alive would save a lot of resources, you should just dispose of it at the end of each REST request and open a new one for the next request.
A web request normally takes a short time to handle, and most resources used for it are freed when the request completes. That makes most of the objects short-lived, and those are the ones the garbage collector handles most efficiently. Holding on to objects across several requests makes them very long-lived, which uses up memory on the server and makes the garbage collector work harder. Unless there is a specific reason to hold on to an object, you shouldn't let it live longer than it takes to handle the request.
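In code, that advice amounts to scoping the client to a single request; a minimal sketch (the URL is a placeholder):

// Create, use, and dispose the client within one request.
public async Task<string> CallBackendAsync()
{
    using (var client = new HttpClient())
    {
        return await client.GetStringAsync("https://backend.example/api/resource");
    }
}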
What you could do is create a class which performs the user-specific memory functions you want. This class would contain a method which instantiates the HttpClient() object and then performs the user-specific operations. It would also contain another method which clears the user-specific memory, i.e. disposes of the HttpClient() object and cleans up any user-specific data.
So, essentially, your code would look like this:
public class HttpHelper
{
    public void LoadUserInformation()
    {
        HttpClient httpClientObj = new HttpClient();
        // perform user-specific tasks
        // your logic here
        // store the httpClientObj object in session
        // ("UserHttpClient" is an illustrative session key)
        HttpContext.Current.Session["UserHttpClient"] = httpClientObj;
    }

    public void DisposeUserInformation()
    {
        // fetch the httpClientObj from session
        var httpClientObj = (HttpClient)HttpContext.Current.Session["UserHttpClient"];
        // perform user-specific cleanup
        // your logic here
        httpClientObj.Dispose();
    }
}
Now, in either scenario, whether the session times out or the user logs out, you can call the DisposeUserInformation() method, and that handles both cases.
There is a Session_End() method in Global.asax, which is called when the session ends. You can call DisposeUserInformation() there.
You could also call it on the logout button click in the controller.
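A sketch of the Global.asax hook (bear in mind that Session_End only fires with in-process session state):

// Global.asax.cs
protected void Session_End(object sender, EventArgs e)
{
    // dispose the user-specific HttpClient and clear user data
    new HttpHelper().DisposeUserInformation();
}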
Hope this helps!!!
I really don't recommend storing anything IDisposable in the session. What if, in the process of downloading from the Web API, the user clicks Logout in another window? You'd dispose of the HttpClient while it's in use. That is a small edge case, but there can be plenty of edge cases with storing IDisposable objects in session. Also, if you need to scale out to multiple servers, that requires storing Session somewhere other than in-proc, which requires the object to be serializable (which HttpClient is not).
Instead:
[Serializable]
public sealed class ApiClient
{
    public ApiClient(Uri baseAddress)
    {
        this.BaseAddress = baseAddress;
    }

    public Uri BaseAddress { get; private set; }

    public async Task<IEnumerable<Person>> GetPersonsAsync()
    {
        var address = new Uri(this.BaseAddress, "Employees/Persons");
        using (var client = new HttpClient())
        {
            // something like this: fetch the JSON with a short-lived client and deserialize it
            var json = await client.GetStringAsync(address);
            return JsonConvert.DeserializeObject<IEnumerable<Person>>(json);
        }
    }
}
Nice session wrapper:
public static class SessionExtensions
{
    public static bool TryGetValue<T>(this HttpSessionStateBase session, out T value)
        where T : class
    {
        var name = typeof(T).FullName;
        value = session[name] as T;
        var result = value != null;
        return result;
    }

    public static void SetValue<T>(this HttpSessionStateBase session, T value)
    {
        var name = typeof(T).FullName;
        session[name] = value;
    }

    public static void RemoveValue<T>(this HttpSessionStateBase session)
    {
        var name = typeof(T).FullName;
        session[name] = null;
    }

    public static bool ValueExists(this HttpSessionStateBase session, Type objectType)
    {
        var name = objectType.FullName;
        var result = session[name] != null;
        return result;
    }
}
Now you can create the API client per user:
Session.SetValue(new ApiClient(new Uri("http://localhost:443")));
Somewhere else you can get the persons:
ApiClient client;
if (Session.TryGetValue(out client))
{
    var persons = await client.GetPersonsAsync();
}

SignalR recording when a Web Page has closed

I am using MassTransit request and response with SignalR. The web site makes a request to a Windows service that creates a file. When the file has been created, the Windows service sends a response message back to the web site. The web site opens the file and makes it available for the users to see. I want to handle the scenario where the user closes the web page before the file is created; in that case I want the created file to be emailed to them.
Regardless of whether the user has closed the web page or not, the message handler for the response message will be run. What I want is some way of knowing, within the response message handler, that the web page has been closed. This is what I have done already. It doesn't work, but it does illustrate my thinking. On the web page I have:
$(window).unload(function (event) {
    if (event.clientY < 0) {
        // $.connection.hub.stop();
        $.connection.exportcreate.setIsDisconnected();
    }
});
exportcreate is my hub name. In setIsDisconnected, would I set a property on Caller? Let's say I successfully set a property to indicate that the web page has been closed; how do I find out that value in the response message handler? This is what it does now:
protected void BasicResponseHandler(BasicResponse message)
{
    string groupName = CorrelationIdGroupName(message.CorrelationId);
    GetClients()[groupName].display(message.ExportGuid);
}

private static dynamic GetClients()
{
    return AspNetHost.DependencyResolver.Resolve<IConnectionManager>().GetClients<ExportCreateHub>();
}
I am using the message correlation id as a group name. The ExportGuid on the message is very important to me: it is used to identify the file. So if I am going to email the created file, I have to do it within the response handler, because I need the ExportGuid value. If I did store a value on Caller in my hub when the web page closes, how would I access it in the response handler?
Just in case you need to know, display is defined on the web page as:
exportCreate.display = function (guid) {
    setTimeout(function () {
        top.location.href = 'GetExport.ashx?guid=' + guid;
    }, 500);
};
GetExport.ashx opens the file and returns it as a response.
Thank you,
Regards Ben
I think a better bet would be to implement proper connection handling. Specifically, have your hub implement IDisconnect and IConnected. You would then keep a mapping of connection id to document Guid.
public Task Connect()
{
    connectionManager.MapConnectionToUser(Context.ConnectionId, Context.User.Name);
    return Task.FromResult<object>(null); // nothing asynchronous to do here
}

public Task Disconnect()
{
    var connectionId = Context.ConnectionId;
    var docId = connectionManager.LookupDocumentId(connectionId);
    if (docId != Guid.Empty)
    {
        var userName = connectionManager.GetUserFromConnectionId(connectionId);
        var user = userRepository.GetUserByUserName(userName);
        bus.Publish(new EmailDocumentToUserCommand(docId, user.Email));
    }
    return Task.FromResult<object>(null);
}

// Call from client
public void GenerateDocument(ClientParameters docParameters)
{
    var docId = Guid.NewGuid();
    connectionManager.MapDocumentIdToConnection(Context.ConnectionId, docId);
    var command = new CreateDocumentCommand(docParameters);
    command.CorrelationId = docId;
    bus.Publish(command);
    Caller.creatingDocument(docId);
}

// Acknowledge you got the doc.
// Call this from the display method on the client.
// If this is not called, the Disconnect method will handle sending by email.
public void Ack(Guid docId)
{
    connectionManager.UnmapDocumentFromConnectionId(Context.ConnectionId, docId);
    Caller.sendMessage("ok");
}
Of course, this is off the top of my head.
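The connectionManager used above is left undefined in the answer; a minimal thread-safe sketch of what it could look like (all names hypothetical):

// requires System.Collections.Concurrent
public class ConnectionManager
{
    private readonly ConcurrentDictionary<string, string> userByConnection =
        new ConcurrentDictionary<string, string>();
    private readonly ConcurrentDictionary<string, Guid> documentByConnection =
        new ConcurrentDictionary<string, Guid>();

    public void MapConnectionToUser(string connectionId, string userName)
    {
        userByConnection[connectionId] = userName;
    }

    public string GetUserFromConnectionId(string connectionId)
    {
        string userName;
        return userByConnection.TryGetValue(connectionId, out userName) ? userName : null;
    }

    public void MapDocumentIdToConnection(string connectionId, Guid docId)
    {
        documentByConnection[connectionId] = docId;
    }

    public Guid LookupDocumentId(string connectionId)
    {
        Guid docId;
        return documentByConnection.TryGetValue(connectionId, out docId) ? docId : Guid.Empty;
    }

    public void UnmapDocumentFromConnectionId(string connectionId, Guid docId)
    {
        Guid removed;
        documentByConnection.TryRemove(connectionId, out removed);
    }
}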
