Currently in our ASP.NET app we have one session per request, and we create one transaction every time we load or update an object. See below:
public static T FindById<T>(object id)
{
    ISession session = NHibernateHelper.GetCurrentSession();
    ITransaction tx = session.BeginTransaction();
    T obj;
    try
    {
        obj = session.Get<T>(id);
        tx.Commit();
    }
    catch
    {
        session.Close();
        throw;
    }
    finally
    {
        tx.Dispose();
    }
    return obj;
}
public virtual void Save()
{
    ISession session = NHibernateHelper.GetCurrentSession();
    ITransaction transaction = session.BeginTransaction();
    try
    {
        if (!IsPersisted)
        {
            session.Save(this);
        }
        else
        {
            session.SaveOrUpdateCopy(this);
        }
        transaction.Commit();
    }
    catch (HibernateException)
    {
        if (transaction != null)
        {
            transaction.Rollback();
        }
        if (session.IsOpen)
        {
            session.Close();
        }
        throw;
    }
    finally
    {
        transaction.Dispose();
    }
}
Obviously this isn't ideal as it means you create a new connection to the database every time you load or save an object, which incurs performance overhead.
Questions:
If an entity is already loaded in the first-level cache, will the BeginTransaction() call open a database connection? I suspect it will...
Is there a better way to handle our transaction management so that there are fewer transactions, and therefore fewer database connections?
Unfortunately the app code is probably too mature to restructure everything like this (with the get and the update in the same transaction):
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var post = session.Get<Post>(1);
    // do something with post
    tx.Commit();
}
Would it be a terrible idea to create one transaction per Request and commit it at the end of the request? I guess the downside is that it ties up one database connection while non-database operations take place.
One transaction per request is considered best practice with NHibernate. This pattern is implemented in Sharp Architecture.
However, NHibernate's BeginTransaction() does not open a connection to the database by itself. The connection is opened at the first real SQL request and closed just after the query is executed, so NHibernate holds an open connection only for the seconds it takes to run the query. You can verify this with SQL Profiler.
Additionally, NHibernate always tries to use SQL Server's connection pool, which is why opening a connection may not be so expensive.
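For illustration, here is a minimal sketch of session-per-request wired up as an IHttpModule. It assumes current_session_context_class is set to "web" in the NHibernate configuration and that NHibernateHelper exposes the ISessionFactory; the module name is made up, so adapt the names to your helper:

using System.Web;
using NHibernate;
using NHibernate.Context;

// Sketch only: binds one ISession to each HTTP request so that
// NHibernateHelper.GetCurrentSession() can resolve it anywhere.
public class NHibernateSessionModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            // Open one session and bind it to the current request.
            ISession session = NHibernateHelper.SessionFactory.OpenSession();
            CurrentSessionContext.Bind(session);
        };

        application.EndRequest += (sender, e) =>
        {
            // Unbind and dispose the session when the request ends.
            ISession session = CurrentSessionContext.Unbind(NHibernateHelper.SessionFactory);
            if (session != null)
            {
                session.Dispose();
            }
        };
    }

    public void Dispose() { }
}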
Would it be a terrible idea to create one transaction per Request and commit it at the end of the request?
It wouldn't be terrible, but I think it's poor practice. If there is an error and the transaction is rolled back, I would much rather handle it on the page than at the end of the request. I prefer to use one session per request with as many transactions as I need during the request (typically one).
NHibernate is very conscientious about managing its database connections; you don't need to worry about it in most cases.
I don't like your transaction logic, especially since you kill the session if the transaction fails. And I'm not sure why you're calling SaveOrUpdateCopy: NHibernate will detect whether the object needs to be persisted, so the IsPersisted check is probably not needed. I use this pattern:
using (var txn = session.BeginTransaction())
{
    try
    {
        session.SaveOrUpdate(this);
        txn.Commit();
    }
    catch (Exception ex)
    {
        txn.Rollback();
        // log
        // handle, wrap, or throw
    }
}
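Note that, unlike the original Save(), this pattern leaves the session open after a rollback: with session-per-request in place, disposing the session at the end of the request is the infrastructure's job, not the entity's.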
Related
I'm running into a problem sending a massive number of requests to a .NET Core web service. I'm using a SemaphoreSlim to limit the number of simultaneous requests. When I get a 10061 error (the web service has refused the connection), I want to dial back the number of simultaneous requests. My idea at the moment is to de-reference the SemaphoreSlim and create another:
await this.semaphoreSlim.WaitAsync().ConfigureAwait(false);
counter++;
Uri uri = new Uri($"{api}/{keyProperty}", UriKind.Relative);
string rowVersion = string.Empty;
try
{
    HttpResponseMessage getResponse = await this.httpClient.GetAsync(uri).ConfigureAwait(false);
    if (getResponse.IsSuccessStatusCode)
    {
        using (HttpContent httpContent = getResponse.Content)
        {
            JObject currentObject = JObject.Parse(await httpContent.ReadAsStringAsync().ConfigureAwait(false));
            rowVersion = currentObject.Value<string>("rowVersion");
        }
    }
}
catch (HttpRequestException httpRequestException)
{
    SocketException socketException = httpRequestException.InnerException as SocketException;
    if (socketException != null && socketException.ErrorCode == PutHandler.ConnectionRefused)
    {
        // Replace the semaphore with one that allows 10% fewer concurrent requests.
        this.semaphoreSlim = new SemaphoreSlim(counter * 90 / 100, counter * 90 / 100);
    }
}
finally
{
    this.semaphoreSlim.Release();
}
If I do this, what will happen to the other tasks that are waiting on the Semaphore that I just de-referenced? My guess is that nothing will happen until the object is garbage collected and disposed.
A SemaphoreSlim (just like any other object in .NET) will exist as long as there are references to it.
However, there is a bug in your code: the SemaphoreSlim being released is this.semaphoreSlim, and if this.semaphoreSlim is changed between being acquired and being released, then the code will release a different semaphore than the one that was acquired. To avoid this problem, copy this.semaphoreSlim into a local variable at the beginning of your method, and acquire and release that local variable.
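In code, the fix is just a local copy (a minimal sketch; the rest of the method is unchanged):

// Copy the field once so the same instance is acquired and released,
// even if this.semaphoreSlim is swapped out while the request is in flight.
SemaphoreSlim semaphore = this.semaphoreSlim;
await semaphore.WaitAsync().ConfigureAwait(false);
try
{
    // ... issue the HTTP request as before ...
}
finally
{
    semaphore.Release();
}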
More broadly, there's a difficulty with the attempted solution: if you start 1000 tasks, they will all reference the old semaphore and ignore the updated this.semaphoreSlim. So you'd need a separate solution. For example, you could define a disposable "token" that represents permission to call the API, and keep an asynchronous collection of these tokens (e.g., a Channel). This gives you full control over how many tokens are outstanding at once.
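A rough sketch of that token idea using System.Threading.Channels (the ApiThrottle name and the drop-on-failure policy are illustrative assumptions, not a prescribed design):

using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public class ApiThrottle
{
    private readonly Channel<object> tokens = Channel.CreateUnbounded<object>();

    public ApiThrottle(int initialConcurrency)
    {
        // Seed the bucket: one token per allowed concurrent call.
        for (int i = 0; i < initialConcurrency; i++)
        {
            this.tokens.Writer.TryWrite(new object());
        }
    }

    public async Task<T> CallAsync<T>(Func<Task<T>> apiCall)
    {
        // Wait until a token is available.
        object token = await this.tokens.Reader.ReadAsync().ConfigureAwait(false);
        T result = await apiCall().ConfigureAwait(false);
        // Success: return the token. If apiCall threw (e.g., connection
        // refused), the token is simply dropped, permanently lowering
        // the allowed concurrency by one.
        this.tokens.Writer.TryWrite(token);
        return result;
    }
}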
When I execute the following code
public async Task<ObservableCollection<CommentModel>> GetTypeWiseComment(int refId, int commentType)
{
    try
    {
        var conn = _dbOperations.GetSyncConnection(DbConnectionType.UserDbConnetion);
        var sqlCommand = new SQLiteCommand(conn)
        {
            CommandText = "bit complex sqlite query"
        };
        List<CommentModel> commentList = null;
        Task commentListTask =
            Task.Factory.StartNew(() => commentList = sqlCommand.ExecuteQuery<CommentModel>().ToList());
        await commentListTask;
        var commentsList = new ObservableCollection<CommentModel>(commentList);
        return commentsList;
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        GC.Collect();
    }
}
Sometimes I get the following exception
Message: database is locked
InnerException: N/A
StackTrace: at SQLite.SQLite3.Prepare2(IntPtr db, String query)
at SQLite.SQLiteCommand.Prepare()
at SQLite.SQLiteCommand.<ExecuteDeferredQuery>d__12<com.IronOne.BoardPACWinAppBO.Meeting.MeetingModel>.MoveNext()
at System.Collections.Generic.List<System.Diagnostics.Tracing.FieldMetadata>..ctor(Collections.Generic.IEnumerable<System.Diagnostics.Tracing.FieldMetadata> collection)
at BoardPACWinApp!<BaseAddress>+0xaa36ca
at com.IronOne.BoardPACWinAppDAO.Comments.CommentsDAO.<>c__DisplayClass4_0.<GetCommentTypeWiseComment>b__0()
at SharedLibrary!<BaseAddress>+0x38ec7b
at SharedLibrary!<BaseAddress>+0x4978cc
Can anyone point out what's wrong with my code?
There is another sync process going on in the background, and sometimes it has a bulk of records that may take more than 10 seconds to process. If the code above happens to execute while the sync is writing to the DB, the write might block the reads, right?
If so, how do I read from SQLite while another process writes to the DB?
Thank you.
As @Mark Benningfield mentioned, enabling WAL mode almost solved my problem. However, there was another issue: my app was creating a lot of SQLite connections. I solved that by creating a singleton module that handles database connections.
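For reference, a minimal sketch of enabling WAL with the sqlite-net style API used in the question (dbPath is a placeholder; the pragma persists in the database file, so it only needs to succeed once):

// Switch the database to write-ahead logging so readers are not
// blocked while the background sync is writing.
var connection = new SQLiteConnection(dbPath);
string mode = connection.ExecuteScalar<string>("PRAGMA journal_mode=WAL;");
// 'mode' should now be "wal".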
Please comment and ask if you require more information when you encounter a similar issue. Thanks.
I have set up a SignalR hub which has the following method:
public void SomeFunction(int SomeID)
{
    try
    {
        Thread.Sleep(600000); // simulate 10 minutes of work before notifying the caller
        Clients.Caller.sendComplete("Complete");
    }
    catch (Exception ex)
    {
        // Exception Handling
    }
    finally
    {
        // Some Actions
    }
    m_Logger.Trace("*****Trying To Exit*****");
}
The issue I am having is that SignalR defaults to Server-Sent Events and then hangs. Even though the method exits minutes later (10 minutes), the method is initiated again (after more than 3 minutes), even when sendComplete and hub.stop() have already been called on the client. Should the user stay on the page, the initial "/send?" request stays open indefinitely. Any assistance is greatly appreciated.
To avoid blocking the method for so long, you could use a Task and call the client method asynchronously.
public void SomeFunction(Int32 id)
{
    var connectionId = this.Context.ConnectionId;
    Task.Delay(600000).ContinueWith(t =>
    {
        var message = String.Format("The operation has completed. The ID was: {0}.", id);
        var context = GlobalHost.ConnectionManager.GetHubContext<SomeHub>();
        context.Clients.Client(connectionId).SendComplete(message);
    });
}
Hubs are created when a request arrives and destroyed after the response is sent down the wire, so in the continuation task you need to create a new context for yourself to be able to work with a client by their connection identifier, since the original hub instance will no longer be around to provide you with the Clients property.
Also note that you can leverage the nicer syntax that uses the async and await keywords for describing asynchronous program flow. See the examples in the ASP.NET site's SignalR Hubs API Guide.
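For example, here is a rough sketch of the same continuation written with await (hub and method names carried over from the snippet above; this is still fire-and-forget):

public void SomeFunction(Int32 id)
{
    var connectionId = this.Context.ConnectionId;

    // Fire-and-forget so the hub invocation itself returns immediately.
    Task.Run(async () =>
    {
        await Task.Delay(TimeSpan.FromMinutes(10));

        // The original hub instance is gone by now; resolve a fresh context.
        var message = String.Format("The operation has completed. The ID was: {0}.", id);
        var context = GlobalHost.ConnectionManager.GetHubContext<SomeHub>();
        context.Clients.Client(connectionId).SendComplete(message);
    });
}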
I have a scenario where a page opens a dialog on a button click. In the opened dialog, a button click reads a list of data from a selected .txt file, builds a query, and adds the data to some database tables. Since there can be a large amount of data, this process can take a long time, and the user would not be able to work in the application until the upload completes. Hence, to make the upload asynchronous, I am using PageAsyncTask. Below is a code sample; but in the method called by the PageAsyncTask, HttpContext.Current is null, so I am not able to use session handling. Any guidance on why this would be null and how I can use the session in this case?
protected void BtnUpload_click(object sender, EventArgs e)
{
    PageAsyncTask asyncTask1 = new PageAsyncTask(OnBegin, OnEnd, OnTimeout, SessionManager.UserData, true);
    Page.RegisterAsyncTask(asyncTask1);
    Page.ExecuteRegisteredAsyncTasks();
}
public IAsyncResult OnBegin(object sender, EventArgs e,
    AsyncCallback cb, object extraData)
{
    _taskprogress = "AsyncTask started at: " + DateTime.Now + ". ";
    uData = extraData as UserData;
    _dlgt = new AsyncTaskDelegate(BeginInvokeUpload);
    IAsyncResult result = _dlgt.BeginInvoke(cb, extraData);
    return result;
}
private void BeginInvokeUpload()
{
    string selectedFileName = string.Empty;
    string returnValuePage = string.Empty;
    User teller = new User();
    SessionManager.UserData = uData; // fails here: HttpContext.Current is null on this thread
}
public class SessionManager
{
    public static UserData UserData
    {
        get
        {
            UserData userData = null;
            if (HttpContext.Current.Session["UserData"] != null)
            {
                userData = HttpContext.Current.Session["UserData"] as UserData;
            }
            return userData;
        }
        set
        {
            HttpContext.Current.Session["UserData"] = value;
        }
    }
}
The answer is simple: you cannot use the session if HttpContext.Current is null.
So if you need to modify the session, you simply cannot, and the only alternative is to build a totally custom session module/solution.
If you only need to read some values, then you can pass them when you create your thread.
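For example, a minimal sketch mirroring the code above: read the session value on the page thread, where HttpContext.Current is still valid, and hand it to the task as extraData:

protected void BtnUpload_click(object sender, EventArgs e)
{
    // Read from the session here, while HttpContext.Current exists.
    UserData snapshot = SessionManager.UserData;

    // The background work only ever touches the snapshot, never the session.
    PageAsyncTask asyncTask = new PageAsyncTask(OnBegin, OnEnd, OnTimeout, snapshot, true);
    Page.RegisterAsyncTask(asyncTask);
    Page.ExecuteRegisteredAsyncTasks();
}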
And finally, if you want to manipulate the session variables, the only solution is to not use a thread.
Why this design?
Why doesn't the MS session allow you to handle it outside of a page, inside a thread? Because it needs to lock the session data while the page is processing. With this lock, even if you started a thread and managed to get the session data, you would not be able to use it in parallel.
Also, if you were able to use the session yourself in a thread, that thread might lock the entire page-view process because, again, the session locks the entire page view, and pages that use the same session do not run in parallel.
This lock of the session for the entire page is necessary to the way the MS session works, and the only way to avoid it is to build a totally custom session solution and handle special cases with different code.
The good thing about that design is that you avoid having to do a lot of locking and synchronization yourself on every page call. For example, if you disable the session on a page and use that page for data inserting, and a user double-clicks the insert, then unless you synchronize the insert yourself you end up with multiple identical insertions.
More about session lock:
Replacing ASP.Net's session entirely
Web app blocked while processing another web app on sharing same session
jQuery Ajax calls to web service seem to be synchronous
ASP.NET Server does not process pages asynchronously
Similar question:
How to get Session Data with out having HttpContext.Current, by SessionID
I am writing an integration web service which will consume various web services from a couple of different backend systems. I want to be able to parallelize non-dependent service calls and to cancel requests that take too long (since I have an SLA to meet).
To aid in parallel backend calls, I am using the async client APIs (generated by wsimport using the client-side JAX-WS binding customization files).
The issue I am having is that when I try to cancel a request, the Response<> appropriately marks the request as cancelled; however, the actual request is not really cancelled. Apparently some part of the JAX-WS runtime submits a com.sun.xml.ws.api.pipe.Fiber to the run queue, and that is what actually performs the request. Calling cancel on the Response<> does not prevent these Fibers from running on the queue and making the request.
Has anyone run into this issue, or a similar one, before?
My code looks like this:
List<Response<QuerySubscriberResponse>> resps = new ArrayList<Response<QuerySubscriberResponse>>();
for (int i = 0; i < 10; i++) {
    resps.add(FPPort.querySubscriberAsync(req));
}
for (int i = 0; i < 10; i++) {
    logger.info("Waiting for " + i);
    try {
        // execution time for this request is 15 seconds,
        // so we should always get a TimeoutException
        QuerySubscriberResponse re = resps.get(i).get(1, TimeUnit.SECONDS);
        logger.info("Got: " + new Marshaller().marshalDocumentToString(re));
    } catch (TimeoutException e) {
        logger.error(e);
        logger.error("Cancelled: " + resps.get(i).cancel(true));
        try {
            logger.info("Waiting for my timed out thing to finish -- technically I've canceled it");
            QuerySubscriberResponse re = resps.get(i).get(); // this causes a CancellationException, as we would expect
            logger.info("Finished waiting for the canceled req");
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    } catch (Exception e) {
        logger.error(e);
    } finally {
        logger.info("");
        logger.info("");
    }
}
I would expect that all of these requests would end up being cancelled, however in reality they all continue to execute and only return when the backend finally decides to send us a response.
As it turns out, this was indeed a bug in the JAX-WS implementation. Oracle has issued a patch (RHEL) against WLS 10.3.3 to address the issue.