What Exactly Does HttpApplicationState.Lock Do?

My application stores two related bits of data in application state. Each time I read these two values, I may (depending on their values) need to update both of them.
So to prevent updating them while another thread is in the middle of reading them, I'm locking application state.
But the documentation for HttpApplicationState.Lock Method really doesn't tell me exactly what it does.
For example:
How does it lock? Does it block any other thread from writing the data?
Does it also block read access? If not, then this exercise is pointless because the two values could be updated after another thread has read the first value but before it has read the second.
In addition to preventing multiple threads from writing the data at the same time, it is helpful to also prevent a thread from reading while another thread is writing; otherwise, the first thread could think it needs to refresh the data when it's not necessary. I want to limit the number of times I perform the refresh.
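For concreteness, here is a minimal sketch of the read-then-maybe-update pattern described above (the key names "ValueA"/"ValueB", the NeedsRefresh check and the Compute helpers are placeholders, not real code from this application):
// Somewhere in a page or handler; Application is the HttpApplicationState instance.
Application.Lock();
try
{
    int a = (int)(Application["ValueA"] ?? 0);
    int b = (int)(Application["ValueB"] ?? 0);

    if (NeedsRefresh(a, b))                      // placeholder predicate
    {
        Application["ValueA"] = ComputeNewA();   // placeholder helpers
        Application["ValueB"] = ComputeNewB();
    }
}
finally
{
    Application.UnLock();
}
The question is whether Lock() actually makes this safe against concurrent readers, or only against concurrent writers.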

Looking at the code, Lock() acquires only the write lock, not the read lock:
public void Lock()
{
    this._lock.AcquireWrite();
}

public void UnLock()
{
    this._lock.ReleaseWrite();
}

public object this[string name]
{
    get
    {
        return this.Get(name);
    }
    set
    {
        // here is the effect on the lock
        this.Set(name, value);
    }
}

public void Set(string name, object value)
{
    this._lock.AcquireWrite();
    try
    {
        base.BaseSet(name, value);
    }
    finally
    {
        this._lock.ReleaseWrite();
    }
}

public object Get(string name)
{
    object obj2 = null;
    this._lock.AcquireRead();
    try
    {
        obj2 = base.BaseGet(name);
    }
    finally
    {
        this._lock.ReleaseRead();
    }
    return obj2;
}
Individual writes and reads are already thread safe, because Set and Get take the lock internally, as shown above. So if you are looping over the data, or reading several related values and then writing them back, you should take the lock yourself around the whole operation with Lock/UnLock, to stop another thread from changing the data in between.
It's also worth reading this answer: Using static variables instead of Application state in ASP.NET
It's better to avoid using Application to store data and instead use a static member with your own lock mechanism: first, because Microsoft suggests it, and second, because reads and writes of application state take a lock on every single access of the data.
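A minimal sketch of that suggestion, keeping the two related values from the question in a static class guarded by a single lock (the class name, fields and refresh condition are made up for illustration):
using System;

public static class AppState
{
    // One lock object guards both values, so readers always see a consistent pair.
    private static readonly object sync = new object();

    private static int valueA;
    private static int valueB;

    // Reads both values as a consistent pair.
    public static void Read(out int a, out int b)
    {
        lock (sync)
        {
            a = valueA;
            b = valueB;
        }
    }

    // Reads both values and, if they need refreshing, updates both,
    // all under the same lock so no other thread can interleave.
    public static void RefreshIfNeeded()
    {
        lock (sync)
        {
            if (valueA > valueB)   // placeholder condition
            {
                valueA = 0;        // placeholder new values
                valueB = 0;
            }
        }
    }
}
Unlike HttpApplicationState, nothing here takes an extra lock per item behind your back; the single lock statement covers the whole read-check-update sequence, which is exactly the guarantee the question is after.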

Related

Singleton State vs. Singleton Event

I have a Blazor Server app. This app is connected to a SQL DB and is at this time relatively complex. Since the main focus is usability, we ran into some problems when accessing the database directly (components not updating correctly, etc.).
Therefore, I am trying to create a StateService which basically acts as a sort of "cache". Data is stored in it and components can access it without any loading times. During my research, I had some questions which the documentation couldn't answer for me.
The Problem
It should be possible for all components to always have the latest state of the data. This means that clients need to be automatically notified of any changes and automatically refresh their states. It also should be possible to serve ~1,000 concurrent users without needing to upgrade to a high-end server (I know that this is very vague).
Possible Solutions
Singleton State
I basically have a service, which holds the data as a property in it and has an OnChange-event. Whenever any data property gets set, the event gets triggered. This service is then used by components to display data. When I add data to the database, the data will then be automatically loaded back into the state. I added this service as a singleton, so there is only one object during the server runtime.
public class SharedStateService
{
    public event Action OnChange;

    private ICollection<MyData>? myData;

    public ICollection<MyData>? MyData
    {
        get => this.myData;
        set
        {
            this.myData = value;
            this.OnChange?.Invoke();
        }
    }
}

public class MyDataService
{
    private readonly SharedStateService sharedStateService;
    private readonly TestDbContext context;

    public MyDataService(TestDbContext context, SharedStateService sharedService)
    {
        this.context = context;
        this.sharedStateService = sharedService;
    }

    public async Task<bool> DeleteData(MyData data)
    {
        try
        {
            this.context.Set<MyData>().Remove(data);
            await this.context.SaveChangesAsync();
        }
        catch (Exception)
        {
            return false;
        }
        await this.ReloadData();
        return true;
    }

    public async Task ReloadData()
    {
        this.sharedStateService.MyData =
            await this.context.Set<MyData>().ToListAsync();
    }
}
In my views, it is now possible to subscribe to the OnChange event and freely use the MyData property.
<table class="table">
    <thead>
        <tr>
            <!-- ... -->
        </tr>
    </thead>
    <tbody>
        @foreach (var data in SharedStateService.MyData)
        {
            <tr>
                <!-- ... -->
            </tr>
        }
    </tbody>
</table>

@code {
    public void Dispose()
    {
        SharedStateService.OnChange -= Refresh;
    }

    protected override void OnInitialized()
    {
        SharedStateService.OnChange += Refresh;
    }

    private async void Refresh()
    {
        await InvokeAsync(this.StateHasChanged);
    }
}
The problem I see with this case is that the entire data set is constantly held on the server. Might there be any problems? Am I overthinking it? What could the possible risks of such an approach be?
Singleton Event
It is similar to the singleton state, but I do not store the data anywhere. Instead of the state, I have a service, which only provides an event, which can be subscribed to. This service is, again, added as a singleton.
public class RefreshService
{
    public event Action OnChange;

    public void Refresh()
    {
        OnChange?.Invoke();
    }
}
This service is then injected into the data providers and called when a change occurs.
I extend MyDataService with a new method.
public async Task<ICollection<MyData>> GetAll()
{
    return await this.context.Set<MyData>().ToListAsync();
}
Afterwards, in my view, I add a property and adjust the Refresh method, to load the data into this local property.
private async void Refresh()
{
    this.MyData = await MyDataService.GetAll();
    await InvokeAsync(this.StateHasChanged);
}
This approach is very similar to the first one, but I don't need to store the data constantly. Is this approach easier on the server? Could this lead to wrong data being displayed, since the data is stored redundantly?
I know that this is a long read, but maybe someone knows which approach is generally preferable over the other.
Listening for data changes isn't a bad idea; the only thing I would focus on is the way you delete and change data. First, I would bring in EFCore.BulkExtensions just for performance; if you will be updating/deleting data all the time it's worth it, mainly because your database will grow as time goes by.
What I think is the proper solution is the second one, Singleton Event, because it avoids a possible error the first one could make. Think of this scenario: you have 1,000 users, and most of them are probably interacting with the data at the same time. If you delete and then push the refreshed data, you could end up with inconsistent data; but if you only raise the change event, you can use it as a flag that the data needs to be reloaded before the user interacts with it (see the sketch below).
Finally, I think you could use the BulkInsertOrUpdateOrDelete method: if a row doesn't exist (by its id) it is inserted, if it changed it is updated, and if an existing id is no longer present it is deleted, all with one optimized call from the bulk extensions. And in case you can't add another library, you should write your own add/update/delete methods.
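A minimal sketch of that flag idea, as a Blazor Server singleton alongside the RefreshService above (the names DataVersionService and MarkChanged are made up for illustration):
using System;
using System.Threading;

public class DataVersionService
{
    private long version;

    public event Action OnChange;

    // Current version of the underlying data; bumped on every change.
    public long Version => Interlocked.Read(ref version);

    // Called by data services after any insert/update/delete.
    public void MarkChanged()
    {
        Interlocked.Increment(ref version);
        OnChange?.Invoke();
    }
}
Each component remembers the Version it last loaded and calls MyDataService.GetAll() again only when the current Version differs, for example right before the user interacts with the data, so stale data is fetched on demand instead of being pushed into every circuit on every change.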

Cassandra Async reads and writes, Best practices

To set the context:
We have 4 tables in Cassandra; out of those 4, one is a data table and the remaining are search tables (let's assume DATA, SEARCH1, SEARCH2 and SEARCH3 are the tables).
We have an initial load requirement of up to 15k rows in one request for the DATA table, and hence for the search tables, to keep them in sync.
We do it in batch inserts, with each batch as 4 queries (one to each table) to keep consistency.
But for every batch we need to read the data first. If it exists, we just update the DATA table's lastUpdatedDate column; otherwise we insert into all 4 tables.
And below is a code snippet of how we are doing it:
public List<Item> loadData(List<Item> items) {
    CountDownLatch latch = new CountDownLatch(items.size());
    ForkJoinPool pool = new ForkJoinPool(6);
    pool.submit(() -> items.parallelStream().forEach(item -> {
        BatchStatement batch = prepareBatchForCreateOrUpdate(item);
        batch.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
        ResultSetFuture future = getSession().executeAsync(batch);
        Futures.addCallback(future, new AsyncCallBack(latch), pool);
    }));
    try {
        latch.await();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    // TODO Consider what to do with the failed items: retry, or remove them from the returned list
    return items;
}

private BatchStatement prepareBatchForCreateOrUpdate(Item item) {
    BatchStatement batch = new BatchStatement();
    Item existingItem = getExisting(item); // synchronous read
    if (null != existingItem) {
        existingItem.setLastUpdatedDateTime(new Timestamp(System.currentTimeMillis()));
        batch.add(existingItem);
        return batch;
    }
    batch.add(item);
    batch.add(convertItemToSearch1(item));
    batch.add(convertItemToSearch2(item));
    batch.add(convertItemToSearch3(item));
    return batch;
}

class AsyncCallBack implements FutureCallback<ResultSet> {
    private CountDownLatch latch;

    AsyncCallBack(CountDownLatch latch) {
        this.latch = latch;
    }

    // Count down the latch on either success or failure so that the thread
    // waiting on latch.await() knows when all the async calls have completed.
    @Override
    public void onSuccess(ResultSet result) {
        latch.countDown();
    }

    @Override
    public void onFailure(Throwable t) {
        LOGGER.warn("Failed async query execution, Cause:{}:{}", t.getCause(), t.getMessage());
        latch.countDown();
    }
}
The execution takes about 1.5 to 2 minutes for 15k items, considering the network round trip between the application and the Cassandra cluster (both reside on the same DNS but in different pods on Kubernetes).
We have ideas to make even the read call getExisting(item) async as well, but handling the failure cases is becoming complex.
Is there a better approach for data loads into Cassandra (considering only async writes through the DataStax Enterprise Java driver)?
First thing: batches in Cassandra are a different thing than in relational DBs, and by using them you're putting more load on the cluster.
Regarding making everything async, I thought about the following possibility:
make the query to the DB, obtain a Future and add a listener to it that will be executed when the query finishes (override the onSuccess);
from that method, you can schedule the execution of the next actions based on the result obtained from Cassandra.
One thing that you need to make sure to check is that you don't issue too many simultaneous requests at the same time. In version 3 of the protocol you can have up to 32k in-flight requests per connection, but in your case you may issue up to 60k (4x15k) requests. I'm using the following wrapper around the Session class to limit the number of in-flight requests.
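The wrapper itself isn't shown above. As a rough, driver-agnostic illustration of the same idea (written in C# rather than the Java driver used in the question, with made-up names), a counted semaphore can cap how many async requests are in flight at once:
using System;
using System.Threading;
using System.Threading.Tasks;

// Illustrative only: limits the number of concurrently executing async operations,
// the same idea as capping in-flight requests per connection.
public class InFlightLimiter
{
    private readonly SemaphoreSlim slots;

    public InFlightLimiter(int maxInFlight)
    {
        slots = new SemaphoreSlim(maxInFlight, maxInFlight);
    }

    public async Task<T> RunAsync<T>(Func<Task<T>> operation)
    {
        await slots.WaitAsync();       // waits (asynchronously) once the cap is reached
        try
        {
            return await operation();  // e.g. one batch execution
        }
        finally
        {
            slots.Release();           // frees a slot when the request completes
        }
    }
}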

How to prevent a static variable from becoming null after the application is held for a long time

I developed and released an application on the market long ago. Now some users have reported crashes after holding the application for a long time. I have identified the reason for the crash: I am using a class with static variables and methods (getters and setters) to store data. Now I want to replace the static approach with something else. From my study I got the following suggestions:
Shared preferences: I have to store more than 40 variables (strings, ints, and JSON arrays and objects), so I think using shared preferences is not a good idea.
SQLite: more than 40 fields are there and I don't need to keep more than one value at a time. I am getting values for the fields from different activities, I mean the name from one activity, the age from another activity, etc. So using SQLite is also not a good idea, I think.
Application class: now I am thinking about using an Application class to store this data. Will it lose the data, like the static variables do, after holding the app for a long time?
So if I replace the static variables with an Application class, please let me know whether the Application's data also becomes null after a long time.
It may be useful to somebody.
Even though I didn't get a solution to my problem, I found the reason why we shouldn't use the Application object to hold data. Please check the link below:
Don't use application object to store data
Normally, if you have to keep something in case your Activity gets destroyed, you save all these things in onSaveInstanceState and restore them in onCreate or in onRestoreInstanceState:
public class MyActivity extends Activity {
    int myVariable;
    final String ARG_MY_VAR = "myvar";

    public void onCreate(Bundle savedState) {
        if (savedState != null) {
            myVariable = savedState.getInt(ARG_MY_VAR);
        } else {
            myVariable = someDefaultValue;
        }
    }

    public void onSaveInstanceState(Bundle outState) {
        outState.putInt(ARG_MY_VAR, myVariable);
        super.onSaveInstanceState(outState);
    }
}
Here, if the Android OS destroys your Activity, onSaveInstanceState will be called and your important variable will be saved. Then, when the user returns to your app, Android restores the activity and your variable will be correctly initialized.
This does not happen when you call finish() yourself, though; it happens only when Android destroys your activity for some reason (which is quite likely to happen at any time while your app is in the background).
First you should override the onSaveInstanceState and onRestoreInstanceState methods in your activity:
@Override
protected void onSaveInstanceState(Bundle outState) {
    outState.putString("myVariable", myVariable);
    // Store all your data inside the bundle
}

@Override
protected void onRestoreInstanceState(Bundle savedInstanceState) {
    if (savedInstanceState != null) {
        myVariable = savedInstanceState.getString("myVariable");
        // Restore all the variables
    }
}
Maybe try using a static variable inside the Application class?
public class YourApplication extends Application
{
    private static YourApplication singleton;

    public String str;

    @Override
    public void onCreate()
    {
        super.onCreate();
        // Assign the singleton when the process starts, so getInstance() never returns null.
        singleton = this;
    }

    public static YourApplication getInstance()
    {
        return singleton;
    }
}
And use the variable via:
YourApplication.getInstance().str = ...; // set variable
... = YourApplication.getInstance().str; // get variable
This variable will stay alive for as long as your app process is running, i.e. until all services and activities of your app have stopped. It will not survive an app crash.

Multiple readers and multiple writers (I mean multiple) synchronization

I am developing a feature that needs a variant of read/write lock that can allow concurrent multiple writers.
Standard read/write lock allows either multiple readers or single writer to run concurrently. I need a variant that can allow multiple readers or multiple writers concurrently. So, it should never allow a reader and a writer concurrently. But, its okay to allow multiple writers at the same time or multiple readers at the same time.
I hope I was clear. I couldn't find any existing algorithm so far. I can think of a couple of ways to do this using queues, etc., but I don't want to take the risk of doing it myself unless none exists.
Do you guys know of any existing scheme?
Thanks,
The concept you need here is tryLock(): you try to acquire the lock and don't get blocked if it is already taken. There is a native implementation of a reentrant lock in Java that provides this, so I will illustrate the example in Java (http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/ReentrantLock.html).
Because tryLock() doesn't block when the lock is unavailable, your writer/reader can proceed. However, you only want to release the lock when you're sure that no one is reading/writing anymore, so you will need to keep a count of readers and writers. You will either need to synchronize this counter or use a native AtomicInteger that allows atomic increment/decrement. For this example I used AtomicInteger.
import java.io.File;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

class ReadAndWrite {
    private ReentrantLock readLock = new ReentrantLock();
    private ReentrantLock writeLock = new ReentrantLock();
    private AtomicInteger readers = new AtomicInteger();
    private AtomicInteger writers = new AtomicInteger();
    private File file;

    public void write() {
        if (!writeLock.isLocked()) {
            readLock.tryLock();
            writers.incrementAndGet(); // Increment the number of current writers
            // ***** Write your stuff *****
            writers.decrementAndGet(); // Decrement the number of current writers
            if (readLock.isHeldByCurrentThread()) {
                while (writers.get() != 0); // Wait until all writers are finished to release the lock
                readLock.unlock();
            }
        } else {
            writeLock.lock();
            write();
        }
    }

    public void read() {
        if (!readLock.isLocked()) {
            writeLock.tryLock();
            readers.incrementAndGet();
            // ***** read your stuff *****
            readers.decrementAndGet(); // Decrement the number of current readers
            if (writeLock.isHeldByCurrentThread()) {
                while (readers.get() != 0); // Wait until all readers are finished to release the lock
                writeLock.unlock();
            }
        } else {
            readLock.lock();
            read();
        }
    }
}
What's happening here: first you check whether your lock is locked, to know if you can perform the action you're about to perform. If it's locked, it means you can't read or write, so you use lock() to put yourself in a wait state and re-call the same action when the lock is freed again.
If it's not locked, then you lock out the other action (if you're going to read, you lock writes, and vice versa) using tryLock(). tryLock() doesn't block if the lock is already taken, so several writers can write at the same time and several readers can read at the same time. When the number of threads doing the same thing as you reaches 0, whoever took the lock in the first place can release it. The only inconvenience of this solution is that the thread holding the lock has to stay alive until everyone is finished, so it can release it.
If you are using pthreads, take a look at the synchronization approach in this question.
You could use a similar approach with two variables readerCount and writerCount and a mutex.
In a reader thread you would lock the mutex and wait for writerCount == 0. If this condition is met, you increment readerCount by 1 and release the lock. Then you do the reading. When you are done, you lock the mutex again, decrement readerCount, signal the condition change and release the lock.
The writer thread follows the same logic but waits for the condition readerCount == 0 and increments/decrements writerCount instead.
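A compact sketch of that counter scheme, written in C# with Monitor standing in for the pthreads mutex and condition variable (the class and method names are made up):
using System.Threading;

// Allows many concurrent readers OR many concurrent writers, but never both at once.
public class SameKindLock
{
    private readonly object gate = new object();
    private int readerCount;
    private int writerCount;

    public void EnterRead()
    {
        lock (gate)
        {
            while (writerCount > 0) Monitor.Wait(gate); // wait until no writers are active
            readerCount++;
        }
    }

    public void ExitRead()
    {
        lock (gate)
        {
            readerCount--;
            Monitor.PulseAll(gate); // wake any waiting writers
        }
    }

    public void EnterWrite()
    {
        lock (gate)
        {
            while (readerCount > 0) Monitor.Wait(gate); // wait until no readers are active
            writerCount++;
        }
    }

    public void ExitWrite()
    {
        lock (gate)
        {
            writerCount--;
            Monitor.PulseAll(gate); // wake any waiting readers
        }
    }
}
Note that this sketch has no fairness policy; under sustained load one side can starve the other, which is the issue the next answer tries to address.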
I did have a solution along the lines of nif's comment; I have posted it below. The problem is the fairness policy: starvation can easily happen. In my approach, one kind of thread is less likely to enter than the other, so I am just getting away with giving priority to girls. Ideally we would want this to have some decent fairness policy.
/**
* RestRoomLock:
*
* This lock tries to simulate a gender based access to common rest room.
* It is okay to have multiple boys or multiple girls inside the room. But,
* we can't have boys and girls at the same time inside the room.
*
* This implementation doesn't really have proper fairness policy. For now,
* girls are being treated with priority as long as boys are being gentle,
* boyEntryBeGentle();
*
* @author bmuppana
*/
public class RestRoomLock {
    int boysInside;
    int girlsInside;
    int girlsWaiting;

    RestRoomLock() {
        boysInside = girlsInside = girlsWaiting = 0;
    }

    public synchronized void boyEntry() {
        while (girlsInside > 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        boysInside++;
    }

    public synchronized void boyEntryBeGentle() {
        while (girlsInside + girlsWaiting > 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        boysInside++;
    }

    public synchronized void boyExit() {
        boysInside--;
        assert boysInside >= 0;
        notifyAll();
    }

    public synchronized void girlEntry() {
        girlsWaiting++;
        while (boysInside > 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        girlsWaiting--;
        girlsInside++;
    }

    public synchronized void girlExit() {
        girlsInside--;
        assert girlsInside >= 0;
        notifyAll();
    }
}

How and where to store nhibernate session in winforms per request

Background
I've read all kinds of blogs and documentation about NHibernate session management. My issue is that I need it for both WinForms and WebForms. That's right, I'm using the same data layer in both a WinForms (Windows .exe) and a WebForms (ASP.NET web) application. I've read a little about the unit of work pattern, and it seems like a good choice for WinForms. Storing the NHibernate session in HttpContext.Current.Items seems like a good way to go for web apps. But what about a combo deal? I have web apps, windows apps, and WCF services that all need to use the same NHibernate data layer. So back to my question...
I plan on using this design: NhibernateBestPractices in my web app like so:
private ISession ThreadSession {
    get {
        if (IsInWebContext()) {
            return (ISession)HttpContext.Current.Items[SESSION_KEY];
        }
        else {
            return (ISession)CallContext.GetData(SESSION_KEY);
        }
    }
    set {
        if (IsInWebContext()) {
            HttpContext.Current.Items[SESSION_KEY] = value;
        }
        else {
            CallContext.SetData(SESSION_KEY, value); // PROBLEM LINE HERE!!!
        }
    }
}
The Problem
The problem I am having when using this code in my windows app is with the line
CallContext.SetData(SESSION_KEY, value);
If I understand CallContext right, this will keep the session open for the entire lifetime of my windows app, because it stores the NHibernate session as part of the main application's thread. I've heard all kinds of bad things about keeping an NHibernate session open too long, and I know that by design it's not meant to stay open very long. If all my assumptions are correct, then the above line of code is a no-no.
Given all this, I'd like to replace the above line with something that will destroy the nhibernate session more frequently in a windows app. Something similar to the lifetime of the HttpRequest. I don't want to leave it up to the windows client to know about the nhibernate session (or transaction) and when to open and close it. I'd like this to be triggered automagically.
The Question
So, where can I store the NHibernate session in a windows app such that something besides the client can automatically begin and end a transaction per database request (that is, whenever a client makes a call to the DB)?
** Update **
Here are 2 more links on how to implement the unit of work pattern
http://msdn.microsoft.com/en-us/magazine/dd882510.aspx
http://www.codeinsanity.com/2008/09/unit-of-work-pattern.html
Each of your apps can provide a common implementation of an interface like IUnitOfWorkStorage
public interface IUnitOfWorkStorage
{
    void StoreUnitOfWork(IUnitOfWork uow);
}
IUnitOfWork can be a wrapper around the ISession, which could look like this:
public interface IUnitOfWork
{
    void Begin();
    void End();
}
Begin might open the session and start a transaction, while End would commit the transaction and close the session. So you can have 2 implementations of IUnitOfWorkStorage, one for the web app and one for the Windows app. The web app can use HttpContext.Current or something similar, and your windows app can provide just a simple object store that is disposed at the end of your action, which would End the unit of work.
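As a rough sketch (not part of the original answer), an NHibernate-backed implementation of that IUnitOfWork might look like this, assuming an ISessionFactory is available to the constructor:
using NHibernate;

// Hypothetical implementation of the IUnitOfWork wrapper described above.
public class NHibernateUnitOfWork : IUnitOfWork
{
    private readonly ISessionFactory sessionFactory;
    private ISession session;
    private ITransaction transaction;

    public NHibernateUnitOfWork(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public void Begin()
    {
        // Open the session and start a transaction.
        session = sessionFactory.OpenSession();
        transaction = session.BeginTransaction();
    }

    public void End()
    {
        // Commit the transaction and close the session.
        transaction.Commit();
        session.Close();
    }
}
Each app then chooses an IUnitOfWorkStorage whose lifetime matches its notion of a "request": HttpContext.Current.Items for the web app, and something scoped to a single service call or form action for the Windows app.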
I ended up using the following code. The only "burden" it puts on my app is in the unit tests, and I'd rather muck up that code with session-specific details than the production code. I kept the same code as mentioned in my question and then added this class to my unit test project:
using System;
using System.Runtime.InteropServices;

namespace MyUnitTests
{
    /// <summary>
    /// Simulates the IHttpModule class but for windows apps.
    /// There's no need to call BeginSession() and EndSession()
    /// if you wrap the object in a "using" statement.
    /// </summary>
    public class NhibernateSessionModule : IDisposable
    {
        public NhibernateSessionModule()
        {
            NHibernateSessionManager.Instance.BeginTransaction();
        }

        public void BeginSession()
        {
            NHibernateSessionManager.Instance.BeginTransaction();
        }

        public void EndSession()
        {
            NHibernateSessionManager.Instance.CommitTransaction();
            NHibernateSessionManager.Instance.CloseSession();
        }

        public void RollBackSession()
        {
            NHibernateSessionManager.Instance.RollbackTransaction();
        }

        #region Implementation of IDisposable

        public void Dispose()
        {
            // If an exception was NOT thrown, commit the transaction; otherwise roll it back.
            if (Marshal.GetExceptionCode() == 0)
            {
                NHibernateSessionManager.Instance.CommitTransaction();
            }
            else
            {
                NHibernateSessionManager.Instance.RollbackTransaction();
            }
            NHibernateSessionManager.Instance.CloseSession();
        }

        #endregion
    }
}
And to use the above class you'd do something like this:
[Test]
public void GetByIdTest()
{
    // begins an nhibernate session and transaction
    using (new NhibernateSessionModule())
    {
        IMyCustomer myCust = MyCustomerDao.GetById(123);
        Assert.IsNotNull(myCust);
    } // ends the nhibernate transaction AND the session
}
Note: if you're using this method, make sure not to wrap your sessions in "using" statements when executing queries from your Dao classes, like in this post. Because you're managing sessions yourself and keeping them open a little longer than a single session per query, you need to remove all the places where you close the session and let the NhibernateSessionModule do that for you (web apps or windows apps).
