Problem with IHttpAsyncHandler and ASP.NET "Requests Executing" counter - asp.net

Solved: I found the solution to this. Not sure why it happens, but switching the application pool from 'integrated' to 'classic' mode solves the problem. Now the 'Requests Executing' counter keeps going up, the application pool process's thread count remains low (~31 threads), and the app is very responsive (as it should be).
I'm using .NET 2.0, so maybe there is an issue there - I tried to google it but had no luck.
See Joe Enzminger's reply for an explanation.
Thank you all again.
PS: the code is used for playing pool (billiards) online - a free Windows version is here for anyone curious and brave enough to try :)
Hello,
I've implemented an IHttpAsyncHandler that client applications "poll" to wait for server notifications. Notifications are generated by other "activities" on the server and the Async Handler does no work at all.
The execution steps are:
IHttpAsyncHandler.BeginProcessRequest
Create AsyncResult instance and add it to a "registered clients" collection
return the AsyncResult
...other server activity will generate notifications to be sent to registered clients...
AsyncResult.CompleteCall called as a result of the generated notification(s).
IHttpAsyncHandler.EndProcessRequest is called
The notification(s) attached to the AsyncResult are written to the response stream.
The problem:
I've tested this on IIS7 on a VM with Windows Server 2008 SP2 and 1 cpu core. After 12 clients register for notifications (using an HTTP GET on the Async.ashx) the performance is degraded to the point that subsequent clients cannot connect.
When I check the ASP.NET performance counters the "Requests Executing" counter goes up with each client registration and stays at 12 (which appears to be its maximum value - probably a thread pool size per CPU).
I find this very confusing. I thought the whole point of async handlers was to free up threads for other connections. It appears that this is not the case, so I must be doing something wrong!
Why is ASP.NET consuming a thread while waiting for my AsyncResult to complete? Is this a config issue? Do I need to do something specific to indicate that this is an Async Handler?
Thank you,
Nikos.
Edit: Added code below:
public class AsyncResult : IAsyncResult
{
    private AsyncCallback _cb;
    private object _state;
    private ManualResetEvent _event;
    private bool _completed;
    private bool _completedsynchronously;
    private HttpContext _context;
    private byte[] _data;
    private int _datalength;
    private object _lock = new object();

    public AsyncResult(AsyncCallback cb, object state, HttpContext context)
    {
        _context = context;
        _cb = cb;
        _state = state;
    }

    public void Close()
    {
        if (_event != null)
        {
            _event.Close();
            _event = null;
        }
    }

    public HttpContext Context { get { return _context; } }
    public Object AsyncState { get { return _state; } }
    public bool CompletedSynchronously { get { return _completedsynchronously; } }
    public bool IsCompleted { get { return _completed; } }
    public byte[] Data { get { return _data; } }
    public int DataLength { get { return _datalength; } }

    public WaitHandle AsyncWaitHandle
    {
        get
        {
            lock (_lock)
            {
                if (_event == null)
                    _event = new ManualResetEvent(_completed);
                return _event;
            }
        }
    }

    public void CompleteCall(byte[] data, int length, bool completedsynchronously)
    {
        _data = data;
        _datalength = length;
        _completedsynchronously = completedsynchronously;
        lock (_lock)
        {
            _completed = true;
            if (_event != null)
                _event.Set();
        }
        if (_cb != null)
            _cb(this);
    }
}
public class Outbound : IHttpAsyncHandler
{
    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object state)
    {
        AsyncResult asyncresult = new AsyncResult(cb, state, context);
        RegisteredClients.Instance.Add(asyncresult);
        return asyncresult;
    }

    public void EndProcessRequest(IAsyncResult ar)
    {
        AsyncResult result = (AsyncResult)ar;
        if (result != null)
        {
            result.Context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
            result.Context.Response.ContentType = "application/octet-stream";
            result.Context.Response.AddHeader("Connection", "keep-alive");
            if (result.Data != null)
                result.Context.Response.OutputStream.Write(result.Data, 0, result.DataLength);
            result.Close();
        }
    }

    public void ProcessRequest(HttpContext context) { }

    public bool IsReusable { get { return true; } }
}

Here is a blog post that explains what you are seeing:
http://blogs.msdn.com/b/tmarq/archive/2007/07/21/asp-net-thread-usage-on-iis-7-0-and-6-0.aspx
and a companion post:
http://blogs.msdn.com/b/tmarq/archive/2010/04/14/performing-asynchronous-work-or-tasks-in-asp-net-applications.aspx
In integrated pipeline mode, using the default configuration, IIS7 places a limit of 12 concurrent REQUESTS (not threads) per CPU. You can change this by modifying the configuration (the maxConcurrentRequestsPerCPU setting).
I couldn't let it go. I'm pretty sure this is what you're seeing. Deep diving into the article, I don't really like the change they made because it clearly causes problems like this, but who am I to judge!

Another thing to check: if your client is not an actual browser but rather another application that is making multiple concurrent requests to your server, this could cause your issue.
Concurrent Requests and Session State
Access to ASP.NET session state is exclusive per session, which means that if two different users make concurrent requests, access to each separate session is granted concurrently. However, if two concurrent requests are made for the same session (by using the same SessionID value), the first request gets exclusive access to the session information. The second request executes only after the first request is finished. (The second session can also get access if the exclusive lock on the information is freed because the first request exceeds the lock time-out.) If the EnableSessionState value in the @ Page directive is set to ReadOnly, a request for the read-only session information does not result in an exclusive lock on the session data. However, read-only requests for session data might still have to wait for a lock set by a read-write request for session data to clear.
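If the handler (or page) does need session data but only reads it, another option is to ask for read-only session access so no exclusive lock is taken. A minimal sketch for an ASP.NET handler (the class name and session key are just placeholders):
using System.Web;
using System.Web.SessionState;

// Implementing IReadOnlySessionState (rather than IRequiresSessionState) tells
// ASP.NET this handler only reads session state, so it does not take the
// exclusive per-session lock. A handler that implements neither interface gets
// no session access and no lock at all.
public class PollingHandler : IHttpHandler, IReadOnlySessionState
{
    public void ProcessRequest(HttpContext context)
    {
        var userName = context.Session["UserName"] as string;
        context.Response.Write("Hello " + (userName ?? "anonymous"));
    }

    public bool IsReusable { get { return true; } }
}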

Related

Lazy CosmosDB Initialization takes longer when more tasks are waiting for it

Context
We have a service that depends on CosmosDB. We created a class with a lazy container that is initialized on startup. In the startup class we do:
CreateDatabaseIfNotExistsAsync
CreateContainerIfNotExistsAsync
Problem
The first request to CosmosDB starts the initialization.
When we have multiple threads starting up before the initialization and waiting for this lazy initialization to finish, the initialization takes longer the more threads are waiting for it.
Expected
When multiple threads start up, the threads that need the initialized container should not impact the initialization duration, since this is in a locked context (lazy).
In the code example below, when the number of threads is 5, the initialization takes a couple of seconds; the higher the thread count, the longer the initialization takes.
code example:
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

namespace LazyCosmos.Anon
{
    class Program
    {
        static void Main(string[] args)
        {
            new Do().Run().GetAwaiter().GetResult();
        }

        public class Do
        {
            private Lazy<Container> lazyContainer;
            private Container Container => lazyContainer.Value;

            public Do()
            {
                lazyContainer = new Lazy<Container>(() => InitializeContainer().GetAwaiter().GetResult());
            }

            public async Task Run()
            {
                try
                {
                    var tasks = new Task[100];
                    for (int i = 0; i < 100; i++)
                    {
                        tasks[i] = Task.Run(() =>
                            ReadItemAsync<Item>("XXX", "XXX"));
                    }
                    await Task.WhenAll(tasks);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                    throw;
                }
            }

            public async Task<T> ReadItemAsync<T>(string id, string partitionKey)
            {
                var itemResponse = await Container.ReadItemAsync<T>(id, new PartitionKey(partitionKey));
                return itemResponse.Resource;
            }

            private async Task<Container> InitializeContainer()
            {
                var s = Stopwatch.StartNew();
                Console.WriteLine($"Started {s.ElapsedMilliseconds}ms");
                var configuration = new CosmosDbServiceConfiguration("XXX", null, collectionId: "XXX",
                    "XXX", 400);
                var _cosmosClient = new ColdStorageCosmosClient(new ActorColdStorageConfiguration("XXX", "XXX", "https://XXX.XX", "XXX"));
                var database = await _cosmosClient
                    .CreateDatabaseIfNotExistsAsync(configuration.DatabaseId, configuration.DatabaseThroughput);
                Console.WriteLine($"CreateDatabaseIfNotExistsAsync took {s.ElapsedMilliseconds}ms");
                var containerProperties = new ContainerProperties
                {
                    Id = configuration.ContainerId,
                    PartitionKeyPath = $"/{configuration.PartitionKey}",
                    DefaultTimeToLive = configuration.DefaultTimeToLive
                };
                var db = (Database)database;
                var containerIfNotExistsAsync = await db.CreateContainerIfNotExistsAsync(containerProperties, configuration.ContainerThroughput);
                s.Stop();
                Console.WriteLine($"CreateContainerIfNotExistsAsync took {s.ElapsedMilliseconds}ms");
                return containerIfNotExistsAsync;
            }
        }
    }

    public class CosmosDbServiceConfiguration
    {
        public CosmosDbServiceConfiguration(string databaseId, int? databaseThroughput, string collectionId, string partitionKey, int? containerThroughput = null)
        {
            DatabaseId = databaseId;
            ContainerId = collectionId;
            DatabaseThroughput = databaseThroughput;
            ContainerThroughput = containerThroughput;
            PartitionKey = partitionKey;
        }

        public string DatabaseId { get; }
        public int? DatabaseThroughput { get; }
        public string ContainerId { get; }
        public int? ContainerThroughput { get; }
        public string PartitionKey { get; }
        public int? DefaultTimeToLive { get; set; }
    }

    public class ColdStorageCosmosClient : CosmosClient
    {
        public ColdStorageCosmosClient(ActorColdStorageConfiguration actorColdStorageConfiguration) : base(actorColdStorageConfiguration.EndpointUrl, actorColdStorageConfiguration.Key)
        {
        }
    }

    public class ActorColdStorageConfiguration
    {
        public ActorColdStorageConfiguration(string databaseName, string collectionName, string endpointUrl, string key)
        {
            DatabaseName = databaseName;
            CollectionName = collectionName;
            EndpointUrl = endpointUrl;
            Key = key;
        }

        public string DatabaseName { get; }
        public string CollectionName { get; }
        public string EndpointUrl { get; }
        public string Key { get; }
    }

    public class Item
    {
        public string id { get; set; }
    }
}
You're experiencing thread pool exhaustion. There are a few different concepts conflicting here to cause the exhaustion.
First, even though asynchronous code does not use a thread for the duration of the asynchronous operation, it often does need to very briefly borrow a thread pool thread in order to do housework when the asynchronous operation completes. As a result, most asynchronous code only runs efficiently if there is a free thread pool thread available, and if there are no thread pool threads available, then asynchronous code may be delayed.
Another part of the puzzle is that the thread pool has a limited thread injection rate. This is deliberate, so that the thread pool isn't constantly creating/destroying threads as its load varies. That would be very inefficient. Instead, a thread pool that has all of its threads busy (and still has more work to do) will only add a thread every few seconds.
The final concept to recognize is that Lazy<T> is blocking when using the default LazyThreadSafetyMode.ExecutionAndPublication behavior. The way this Lazy<T> works is that only one thread executes the delegate (() => InitializeContainer().GetAwaiter().GetResult()). All other threads block, waiting for that delegate to complete.
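As a tiny standalone illustration of that blocking behavior (hypothetical numbers, nothing Cosmos-specific):
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class LazyBlockingDemo
{
    // Default mode is LazyThreadSafetyMode.ExecutionAndPublication: only the first
    // caller runs the factory; every other caller blocks until it has finished.
    static readonly Lazy<int> Slow = new Lazy<int>(() =>
    {
        Thread.Sleep(5000);   // stand-in for slow initialization
        return 42;
    });

    static void Main()
    {
        // Each of these work items ties up a thread pool thread inside Slow.Value
        // until the single factory call completes.
        var tasks = Enumerable.Range(0, 20)
            .Select(_ => Task.Run(() => Slow.Value))
            .ToArray();
        Task.WaitAll(tasks);
        Console.WriteLine("All done: " + Slow.Value);
    }
}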
So now, putting it all together:
A large number of work items are placed onto the thread pool work queue (by Task.Run). The thread pool begins executing only as many work items as it has threads.
Each of these work items accesses the Container (i.e., Lazy<Container>.Value), so each one of these work items blocks a thread until the initialization is complete. Only the first work item accessing Container will run the initialization code.
The (asynchronous) initialization code attempts to make progress, but it needs a thread pool thread to be free in order to handle housekeeping when its awaits complete. So it is also queueing very small work items to the thread pool as necessary.
The thread pool has more work than it can handle, so it begins adding threads. Since it has a limited thread injection rate, it will only add a thread every few seconds.
The thread pool is overwhelmed with work, but it can't know which work items are the important ones. Most of its work items will just block on the Lazy<T>, which uses up another thread. The thread pool cannot know which work items are the ones from the asynchronous initialization code that will free up the other work items (and threads). So most of the threads added by the thread pool just end up blocking on other work that is having a hard time completing because there are no free thread pool threads available.
So, let's talk solutions.
IMO, the easiest solution is to remove (most of) the blocking. Allow the initialization to be asynchronous by changing the lazy type from Lazy<Container> to Lazy<Task<Container>>. The Lazy<Task<T>> pattern is "asynchronous lazy initialization", and it works by Lazy-initializing a task.
The Lazy<T> part of Lazy<Task<T>> ensures that only the first caller begins executing the asynchronous initialization code. As soon as that asynchronous code yields at an await (and thus returns a Task), the Lazy<T> part is done. So the blocking of other threads is very brief.
Then all the work items get the same Task<T>, and they can all await it. A single Task<T> can be safely awaited any number of times. Once the asynchronous initialization code is complete, the Task<T> gets a result, and all the awaiting work items can continue executing. Any future calls to the Lazy<Task<T>>.Value will immediately get a completed Task<T> which takes no time at all to await since it is already completed.
Once you wrap your head around Lazy<Task<T>>, it's pretty straightforward to use. The only awkward part is that the code for the work items now has to await the shared asynchronous initialization:
public class Do
{
    private Lazy<Task<Container>> lazyContainer;
    private Task<Container> ContainerTask => lazyContainer.Value;

    public Do()
    {
        lazyContainer = new Lazy<Task<Container>>(InitializeContainer);
    }

    public async Task<T> ReadItemAsync<T>(string id, string partitionKey)
    {
        // This is the awkward part. Until you get used to it. :)
        var container = await ContainerTask;
        var itemResponse = await container.ReadItemAsync<T>(id, new PartitionKey(partitionKey));
        return itemResponse.Resource;
    }

    // other methods are unchanged.
}
I have an AsyncLazy<T> type in my AsyncEx library, which is essentially the same as Lazy<Task<T>> with a few usability enhancements.
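For reference, the core idea can be sketched in a few lines (a simplification, not the actual AsyncEx implementation):
using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

// Simplified "async lazy": a Lazy<Task<T>> plus a GetAwaiter so the instance
// itself can be awaited directly.
public sealed class AsyncLazy<T> : Lazy<Task<T>>
{
    public AsyncLazy(Func<Task<T>> taskFactory)
        : base(() => Task.Run(taskFactory)) { }

    public TaskAwaiter<T> GetAwaiter() { return Value.GetAwaiter(); }
}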
More information on this pattern:
Asynchronous lazy initialization blog post.
Recipe 14.1 "Initializing Shared Resources" in my book Concurrency in C# Cookbook, 2nd edition.
The Lazy<Task<T>> asynchronous lazy initialization pattern works great if you have a widely shared resource that may or may not need to be initialized. If you have a local resource (like a private member as in this example), and if you know you will always want it initialized, then you can make the code simpler by just using Task<T> instead of Lazy<Task<T>>:
public class Do
{
    private Task<Container> ContainerTask;

    public Do()
    {
        // Important semantic change:
        // This begins initialization *immediately*.
        // It does not wait for work items to request the container.
        ContainerTask = InitializeContainer();
    }

    public async Task<T> ReadItemAsync<T>(string id, string partitionKey)
    {
        var container = await ContainerTask;
        var itemResponse = await container.ReadItemAsync<T>(id, new PartitionKey(partitionKey));
        return itemResponse.Resource;
    }

    // other methods are unchanged.
}

TelemetryProcessor - Multiple instances overwrite Custom Properties

I have a very basic HTTP POST-triggered API which creates a TelemetryClient. I needed to provide a custom property in this telemetry for each individual request, so I implemented a TelemetryProcessor.
However, when subsequent POST requests are handled and a new TelemetryClient is created, that seems to interfere with the first request. I end up seeing maybe a dozen or so entries in App Insights containing the first customPropertyId, and close to 500 for the second, when in reality the numbers should be split evenly. It seems as though the creation of the second TelemetryClient somehow interferes with the first.
Basic code is below, if anyone has any insight (no pun intended) as to why this might occur, I would greatly appreciate it.
ApiController which handles the POST request:
public class TestApiController : ApiController
{
    public HttpResponseMessage Post([FromBody]RequestInput request)
    {
        try
        {
            Task.Run(() => ProcessRequest(request));
            return Request.CreateResponse(HttpStatusCode.OK);
        }
        catch (Exception)
        {
            return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, Constants.GenericErrorMessage);
        }
    }

    private async void ProcessRequest(RequestInput request)
    {
        string customPropertyId = request.customPropertyId;
        //trace handler creates the TelemetryClient for custom property
        CustomTelemetryProcessor handler = new CustomTelemetryProcessor(customPropertyId);
        //etc.....
    }
}
CustomTelemetryProcessor which creates the TelemetryClient:
public class CustomTelemetryProcessor
{
    private readonly string _customPropertyId;
    private readonly TelemetryClient _telemetryClient;

    public CustomTelemetryProcessor(string customPropertyId)
    {
        _customPropertyId = customPropertyId;
        var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;
        builder.Use((next) => new TelemetryProcessor(next, _customPropertyId));
        builder.Build();
        _telemetryClient = new TelemetryClient();
    }
}
TelemetryProcessor:
public class TelemetryProcessor : ITelemetryProcessor
{
    private string CustomPropertyId { get; }
    private ITelemetryProcessor Next { get; set; }

    // Link processors to each other in a chain.
    public TelemetryProcessor(ITelemetryProcessor next, string customPropertyId)
    {
        CustomPropertyId = customPropertyId;
        Next = next;
    }

    public void Process(ITelemetry item)
    {
        if (!item.Context.Properties.ContainsKey("CustomPropertyId"))
        {
            item.Context.Properties.Add("CustomPropertyId", CustomPropertyId);
        }
        else
        {
            item.Context.Properties["CustomPropertyId"] = CustomPropertyId;
        }
        Next.Process(item);
    }
}
It's better to avoid creating a TelemetryClient for each request; instead, re-use a single static TelemetryClient instance. Telemetry Processors and/or Telemetry Initializers should also typically be registered only once for the telemetry pipeline, not once per request. TelemetryConfiguration.Active is static, and by adding a new processor with each request the chain of processors only grows.
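For example, a single shared client could look like this (a minimal sketch; the class name is just a placeholder):
using Microsoft.ApplicationInsights;

// One TelemetryClient for the whole application instead of one per request.
public static class AppTelemetry
{
    public static readonly TelemetryClient Client = new TelemetryClient();
}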
The appropriate setup would be to add a Telemetry Initializer (Telemetry Processors are typically used for filtering, Initializers for data enrichment) once into the telemetry pipeline, e.g. by adding an entry to the ApplicationInsights.config file (if present) or via code on TelemetryConfiguration.Active somewhere in global.asax, e.g. in Application_Start:
TelemetryConfiguration.Active.TelemetryInitializers.Add(new MyTelemetryInitializer());
Initializers are executed in the same context/thread where Track..(..) was called / the telemetry item was created, so they have access to thread-local storage and/or local objects to read parameter values from.
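A sketch of what such an initializer could look like for this scenario (the names are illustrative, and it assumes the per-request value has been stashed somewhere the initializer can reach, e.g. HttpContext.Current.Items):
using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class MyTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Hypothetical: the controller stored the per-request id in HttpContext.Items.
        var context = HttpContext.Current;
        var customPropertyId = context != null
            ? context.Items["CustomPropertyId"] as string
            : null;

        if (!string.IsNullOrEmpty(customPropertyId) &&
            !telemetry.Context.Properties.ContainsKey("CustomPropertyId"))
        {
            telemetry.Context.Properties.Add("CustomPropertyId", customPropertyId);
        }
    }
}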

synchronously invoke client side method with SignalR

SignalR does not have the ability to have client methods that return a value, so I am trying to create a helper class to make this possible.
This is what I am trying to do:
Server side: Call the client method and provide a unique request id: Client(clientId).GetValue(requestId)
Server side: Save the requestId and wait for the answer using a ManualResetEvent
Client side: Inside void GetValue(Guid requestId), call the server method hubProxy.Invoke("GetValueFinished", requestId, 10)
Server side: Find the waiting method by requestId => set the return value => set the signal
Server side: The method is no longer waiting for the ManualResetEvent and returns the retrieved value.
Unfortunately, I am not able to get it to work. Here is my code:
public static class MethodHandler
{
    private static ConcurrentDictionary<Guid, ReturnWaiter> runningMethodWaiters = new ConcurrentDictionary<Guid, ReturnWaiter>();

    public static TResult GetValue<TResult>(Action<Guid> requestValue)
    {
        Guid key = Guid.NewGuid();
        ReturnWaiter returnWaiter = new ReturnWaiter(key);
        runningMethodWaiters.TryAdd(key, returnWaiter);
        requestValue.Invoke(key);
        returnWaiter.Signal.WaitOne();
        return (TResult)returnWaiter.Value;
    }

    public static void GetValueResult(Guid key, object value)
    {
        ReturnWaiter waiter;
        if (runningMethodWaiters.TryRemove(key, out waiter))
        {
            waiter.Value = value;
        }
    }
}
internal class ReturnWaiter
{
    private ManualResetEvent _signal = new ManualResetEvent(false);
    public ManualResetEvent Signal { get { return _signal; } }

    public Guid Key { get; private set; }

    public ReturnWaiter(Guid key)
    {
        Key = key;
    }

    private object _value;
    public object Value
    {
        get { return _value; }
        set
        {
            _value = value;
            Signal.Set();
        }
    }
}
Using this MethodHandler class, I need to have two methods server-side:
public int GetValue(string clientId)
{
    return MethodHandler.GetValue<int>(key => Clients.Client(clientId).GetValue(key));
}

public void GetValueResult(Guid key, object value)
{
    MethodHandler.GetValueResult(key, value);
}
Client side implementation is like this:
// Method registration
_hubProxy.On("GetValue", new Action<Guid>(GetValue));

public void GetValue(Guid requestId)
{
    int result = 10;
    _hubProxy.Invoke("GetValueResult", requestId, result);
}
PROBLEM:
If I call the server-side GetValue("clientid"), the client method will not be invoked. If I comment out returnWaiter.Signal.WaitOne();, the client-side GetValue is called and the server-side GetValueResult is called, but of course by then the method has already returned.
I thought it had to do with the ManualResetEvent, but even using while(!returnWaiter.HasValue) Thread.Sleep(100); does not fix the issue.
Any ideas how to fix this issue?
Thanks in advance!
First, I think that, rather than asking for help in how to make it synchronous, it would be best if you just told us what it is you're trying to do so we could suggest a proper approach to do it.
You don't show your MethodHandler::Retrieve method, but I can guess pretty much what it looks like, and it's not even the real problem. I have to tell you in the nicest possible way that this is a really bad idea. It will simply never scale. This would only work with a single SignalR server instance, because you're relying on machine-specific resources (e.g. the kernel objects behind the ManualResetEvent) to provide the blocking. Maybe you don't need to scale beyond one server to meet your requirements, but this is still a terrible waste of resources even on a single server.
You're actually on the right track with the client calling back with the requestId as a correlating identifier. Why can't you use that correlation to resume logical execution of whatever process you are in the middle of on the server side? That way no resources are held while waiting for the message to be delivered to the client and processed, and then for the follow-up message (GetValueResult in your sample) to be sent back to the server instance.
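If you do stay on a single server and want to keep the request/response shape, one way to at least stop burning a thread per pending call is to correlate with a TaskCompletionSource instead of a ManualResetEvent. This is only a sketch (the class and method names are mine, and it still holds per-request state in server memory):
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class AsyncMethodHandler
{
    private static readonly ConcurrentDictionary<Guid, TaskCompletionSource<object>> Pending =
        new ConcurrentDictionary<Guid, TaskCompletionSource<object>>();

    // Server side: send the request to the client, then await the correlated reply.
    public static async Task<TResult> GetValueAsync<TResult>(Action<Guid> requestValue)
    {
        var key = Guid.NewGuid();
        var tcs = new TaskCompletionSource<object>();
        Pending.TryAdd(key, tcs);
        requestValue(key);              // e.g. Clients.Client(clientId).GetValue(key)
        var value = await tcs.Task;     // no thread is blocked while waiting
        return (TResult)value;
    }

    // Called from the hub method that the client invokes with the result.
    public static void SetResult(Guid key, object value)
    {
        TaskCompletionSource<object> tcs;
        if (Pending.TryRemove(key, out tcs))
            tcs.TrySetResult(value);
    }
}
The hub method can then return Task<int> and await AsyncMethodHandler.GetValueAsync<int>(...) instead of blocking.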
Problem solved:
The problem only occurred in Hub.OnConnected and Hub.OnDisconnected. I don't have an exact explanation why, but these methods probably have to complete before SignalR will dispatch your method call to the client.
So I changed code:
public override Task OnConnected()
{
    // NOT WORKING
    Debug.Print(MethodHandler.GetValue<int>(key => Clients.Client(Context.ConnectionId).GetValue(key)).ToString());

    // WORKING
    new Thread(() => Debug.Print(MethodHandler.GetValue<int>(key => Clients.Client(Context.ConnectionId).GetValue(key)).ToString())).Start();

    return base.OnConnected();
}

Synchronous responses to `Gdx.net.sendHttpRequest` in LibGDX

I'm making a small game in LibGDX. I'm saving the player's username locally as well as on a server. The problem is that the application is not waiting for the result of the call so the online database's ID is not saved locally. Here's the overall flow of the code:
//Create a new user object
User user = new User(name);
//Store the user in the online database
NetworkService networkService = new NetworkService();
String id = networkService.saveUser(user);
//Set the newly generated dbase ID on the local object
user.setId(id);
//Store the user locally
game.getUserService().persist(user);
In this code, the id variable is not getting set because the saveUser function returns immediately. How can I make the application wait for the result of the network request so I can work with the results of the server communication?
This is the code for saveUser:
public String saveUser(User user) {
    Map<String, String> parameters = new HashMap<String, String>();
    parameters.put("action", "save_user");
    parameters.put("json", user.toJSON());

    HttpRequest httpGet = new HttpRequest(HttpMethods.POST);
    httpGet.setUrl("http://localhost:8080/provisioner");
    httpGet.setContent(HttpParametersUtils.convertHttpParameters(parameters));

    WerewolfsResponseListener responseListener = new WerewolfsResponseListener();
    Gdx.net.sendHttpRequest(httpGet, responseListener);
    return responseListener.getLastResponse();
}
This is the WerewolfsResponseListener class:
class WerewolfsResponseListener implements HttpResponseListener {
    private String lastResponse = "";

    public void handleHttpResponse(HttpResponse httpResponse) {
        // Read the result once; the response stream can only be consumed once.
        String result = httpResponse.getResultAsString();
        System.out.println(result);
        this.lastResponse = result;
    }

    public void failed(Throwable t) {
        System.out.println("Saving user failed: " + t.getMessage());
        this.lastResponse = null;
    }

    public String getLastResponse() {
        return lastResponse;
    }
}
The asynchrony you are seeing is from Gdx.net.sendHttpRequest. The methods on the second parameter (your WerewolfsResponseListener) will be invoked whenever the request comes back. The success/failure methods will not be invoked "inline".
There are two basic approaches for dealing with callbacks structured like this: "polling" or "events".
With polling, your main game loop could "check" the responseListener to see if its succeeded or failed. (You would need to modify your current listener a bit to disambiguate the success case and the empty string.) Once you see a valid response, you can then do the user.setId() and such.
With "events" then you can just put the user.setId() call inside the responseListener callback, so it will be executed whenever the network responds. This is a bit more of a natural fit to the Libgdx net API. (It does mean your response listener will need a reference to the user object.)
It is not possible to "wait" inline for the network call to return. The Libgdx network API (correctly) assumes you do not want to block indefinitely in your render thread, so it's not structured for that (the listener will be queued up as a Runnable, so the earliest it can run is on the next render call).
I would not recommend this to any human being, but if you need to test something in a quick and dirty fashion and absolutely must block, this will work. There's no timeout, so again, be prepared for absolute filth:
long wait = 10;
while (!listener.isDone())
{
    Gdx.app.log("Net", "Waiting for response");
    try
    {
        Thread.sleep(wait *= 2);
    }
    catch (InterruptedException e)
    {
        e.printStackTrace();
    }
}
public static class BlockingResponseListener implements HttpResponseListener
{
    private String data;
    private boolean done = false;
    private boolean succeeded = false;

    @Override
    public void handleHttpResponse(HttpResponse httpResponse)
    {
        Gdx.app.log("Net", "response code was " + httpResponse.getStatus().getStatusCode());
        data = httpResponse.getResultAsString();
        succeeded = true;
        done = true;
    }

    @Override
    public void failed(Throwable t)
    {
        done = true;
        succeeded = false;
        Gdx.app.log("Net", "Failed due to exception [" + t.getMessage() + "]");
    }

    public boolean succeeded()
    {
        return succeeded;
    }

    public boolean isDone()
    {
        return done;
    }

    public String getData()
    {
        return data;
    }
}

nhibernate session manager implementation

I am new to NHibernate and slowly working my way through learning it. I tried to implement a session manager class to help me get the session for my db calls. Below is the code for it. Can someone please say whether this is architecturally correct and whether they foresee any scalability or performance issues?
public static class StaticSessionManager
{
    private static ISession _session;

    public static ISession GetCurrentSession()
    {
        if (_session == null)
            OpenSession();
        return _session;
    }

    private static void OpenSession()
    {
        _session = (new Configuration()).Configure().BuildSessionFactory().OpenSession();
    }

    public static void CloseSession()
    {
        if (_session != null)
        {
            _session.Close();
            _session = null;
        }
    }
}
and in my data provider class, I use the following code to get data.
public class GenericDataProvider<T>
{
    NHibernate.ISession _session;

    public GenericDataProvider()
    {
        this._session = StaticSessionManager.GetCurrentSession();
    }

    public T GetById(object id)
    {
        using (ITransaction tx = _session.BeginTransaction())
        {
            try
            {
                T obj = _session.Get<T>(id);
                tx.Commit();
                return obj;
            }
            catch (Exception ex)
            {
                tx.Rollback();
                StaticSessionManager.CloseSession();
                throw ex;
            }
        }
    }
}
and then
public class UserDataProvider : GenericDataProvider<User>
{
    public User GetUserById(Guid uid)
    {
        return GetById(uid);
    }
}
Final usage in Page
UserDataProvider udp = new UserDataProvider();
User u = udp.GetUserById(xxxxxx-xxx-xxx);
Is this correct? Will instantiating a lot of data providers in a single page cause issues?
I am also facing an issue right now where, if I do the same read operation from multiple machines at the same time, NHibernate throws random errors, which I think is due to transactions.
Please advise.
From what I can see, you are building the session factory whenever you have a null session. You should only call BuildSessionFactory() once, when the application starts.
Where you do this is up to you; some people build the SessionFactory inside Global.asax in Application_Start, or, in your case, you could hold a static sessionFactory instead of a session in your StaticSessionManager class.
I suspect your errors are due to the fact that your session factory is being built multiple times!
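A minimal sketch of that shape (just a sketch; the exact structure is up to you):
using NHibernate;
using NHibernate.Cfg;

public static class StaticSessionManager
{
    // Built exactly once for the lifetime of the application;
    // the factory is thread-safe and expensive to create.
    private static readonly ISessionFactory SessionFactory =
        new Configuration().Configure().BuildSessionFactory();

    // Sessions are cheap: open one per request/unit of work and dispose it when done.
    public static ISession OpenSession()
    {
        return SessionFactory.OpenSession();
    }
}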
Another point is that some people open a transaction (_session.BeginTransaction()) at the beginning of each request and either commit or roll back at the end of the request. This gives you a unit of work, which means you can lose the
using (ITransaction tx = _session.BeginTransaction())
{
...
}
on every method. All of this is open for debate, but I use this approach for 99% of all my code with no trouble at all.
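For reference, a common way to wire up that per-request unit of work in classic ASP.NET is in Global.asax (again only a sketch, assuming a static session factory like the one above):
using System;
using System.Web;
using NHibernate;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // One session and one transaction per request, stored on the request context.
        var session = StaticSessionManager.OpenSession();
        session.BeginTransaction();
        HttpContext.Current.Items["nh.session"] = session;
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var session = HttpContext.Current.Items["nh.session"] as ISession;
        if (session == null) return;

        using (session)
        using (var tx = session.Transaction)
        {
            if (tx == null || !tx.IsActive) return;

            if (HttpContext.Current.Error != null)
                tx.Rollback();   // something failed during the request
            else
                tx.Commit();
        }
    }
}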
