Session compression: negative and positive sides - asp.net

In web.config you can enable session compression.
<sessionState mode="InProc" customProvider="DefaultSessionProvider" compressionEnabled="true" />
What are positive and negative sides of this action?

Well, on the positive side, the session data takes up less space.
On the negative side, compressing and decompressing takes time, so it's slower.
Let me add that, in my opinion, if you use sessions at all, you've made an architectural mistake (exceptions may apply to this rule, but very, very rarely).
It's not a good idea, because if a page writes something into the session, it gets overwritten if I simultaneously open the same page in another browser window (it's the same session).
And because InProc sessions are lost when you change something in the web.config file, you can create an unlimited number of bugs for EVERY currently active user...
Plus you lose InProc sessions if the VM gets moved to another server (cloud environments, failover, dynamic scale-out).
Also, the InProc provider doesn't require objects to be marked as serializable.
If you change to, for example, an SQL session provider, you'll get exceptions in all places where you put an object that hasn't been marked as serializable into the session.
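For example (a minimal sketch; the UserContext class name is made up), this is the kind of type that works fine with the InProc provider but blows up with the SQL provider unless it is marked [Serializable]:

// Must be marked [Serializable] (or implement ISerializable) once session state
// moves out of process (SQLServer/StateServer); InProc never checks this.
[Serializable]
public class UserContext
{
    public int UserId { get; set; }
    public string DisplayName { get; set; }
}

// With InProc this always works; with the SQL provider it throws at the end of
// the request if UserContext is not serializable.
Session["UserContext"] = new UserContext { UserId = 12435, DisplayName = "jdoe" };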
For example, when you need to query all the locations a user may access (according to portfolio rights in T_SYS_LocationRights):
You get the UserID from the formsAuth-cookie, then use it as the parameter:
DECLARE @userID integer
SET @userID = 12435

SELECT *
FROM T_Locations
WHERE (1=1)
AND
(
    (
        SELECT ISNULL(MAX(CAST(T_SYS_LocationRights.LR_IsRead AS integer)), 0)
        FROM T_SYS_LocationRights
        INNER JOIN T_User_Groups
            ON T_User_Groups.USRGRP_GRP = T_SYS_LocationRights.LR_GRANTEE_ID
        WHERE T_SYS_LocationRights.LR_LC_UID = T_Locations.LC_UID
        AND T_User_Groups.USRGRP_USR = @userID
    ) = 1
)
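To tie this to the page code, here is a rough sketch of running that query with a real parameter (the connection string, the locationsQuery variable, and the assumption that the forms-authentication ticket name holds the numeric user ID are mine):

using System;
using System.Data;
using System.Data.SqlClient;
using System.Web;

public static DataTable GetAccessibleLocations(string connectionString, string locationsQuery)
{
    // Assumes the forms-authentication ticket name stores the numeric user id.
    int userId = int.Parse(HttpContext.Current.User.Identity.Name);

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(locationsQuery, connection)) // locationsQuery: the SELECT above, using @userID
    {
        command.Parameters.Add("@userID", SqlDbType.Int).Value = userId;
        var locations = new DataTable();
        new SqlDataAdapter(command).Fill(locations); // Fill opens and closes the connection itself
        return locations;
    }
}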
Don't just query data according to the maxim:
"If you'll ever need it, it's already there."
Designing a web application (which is multi-threaded by design) around that maxim is a very bad idea.
If you don't need it, don't query it.
If you need it, query it.
If you needed it, don't store it in the session; it's better to query it again if necessary.
You can save much more time by executing all database operations at once: get all the data you need into a System.Data.DataSet (one query operation, one connection open-and-close) and then use that. When the page reloads, you can always reload the data (as a matter of fact, you even should).
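As a sketch of that (table and column names taken from the example above, everything else is assumption), one SqlDataAdapter call can bring back several result sets in a single round trip and a single connection open/close:

using System.Data;
using System.Data.SqlClient;

public static DataSet LoadPageData(string connectionString, int userId)
{
    // Three result sets, one round trip, one connection open-and-close.
    const string sql = @"
        SELECT * FROM T_Locations;
        SELECT * FROM T_User_Groups WHERE USRGRP_USR = @userID;
        SELECT * FROM T_SYS_LocationRights;";

    using (var connection = new SqlConnection(connectionString))
    using (var adapter = new SqlDataAdapter(sql, connection))
    {
        adapter.SelectCommand.Parameters.AddWithValue("@userID", userId);
        var data = new DataSet();
        adapter.Fill(data); // fills data.Tables[0], [1], [2]
        return data;
    }
}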
Don't use the session as a cache. It's not a cache.


MDriven ECO_ID duplicates

We appear to have a problem with MDriven generating the same ECO_ID for multiple objects. For the most part it seems to happen in conjunction with unexpected process shutdowns and/or server shutdowns, but it does also happen during normal activity.
Our system consists of one ASP.NET application and one WinForms application. The ASP.NET app is set up in IIS to use a single worker process. We have a mixture of WebForms and MVC, including ApiControllers. We're using a rather old version of the ECO packages: 7.0.0.10021. We're on VS 2017, target framework is 4.7.1.
We have it configured to use 64-bit integers for object IDs. Database is Firebird. SQL configuration is set to use ReadCommitted transaction isolation.
As far as I can tell we have configured EcoSpaceStrategyHandler with EcoSpaceStrategyHandler.SessionStateMode.Never, which should mean that EcoSpaces are not reused at all, right? (Why would I even use EcoSpaceStrategyHandler in this case, instead of just creating EcoSpace normally with the new keyword?)
We have created MasterController : Controller and MasterApiController : ApiController classes that we use for all our controllers. These have an EcoSpace property that simply does this:
if (ecoSpace == null)
{
    if (ecoSpaceStrategyHandler == null)
        ecoSpaceStrategyHandler = new EcoSpaceStrategyHandler(
            EcoSpaceStrategyHandler.SessionStateMode.Never,
            typeof(DiamondsEcoSpace),
            null,
            false
        );

    ecoSpace = (DiamondsEcoSpace)ecoSpaceStrategyHandler.GetEcoSpace();
}
return ecoSpace;
I.e. if no strategy handler has been created, create one specifying no pooling and no session state persisting of eco spaces. Then, if no ecospace has been fetched, fetch one from the strategy handler. Return the ecospace. Is this an acceptable approach? Why would it be better than simply doing this:
if (ecoSpace == null)
    ecoSpace = new DiamondsEcoSpace();
return ecoSpace;
In aspx we have a master page that has an EcoSpaceManager. It has been configured to use a pool but SessionStateMode is Never. It has EnableViewState set to true. Is this acceptable? Does it mean that EcoSpaces will be pooled but inactivated between round trips?
It is possible that we receive multiple incoming API calls in tight succession, so that one API call hasn't been completed before the next one comes in. I assume that this means that multiple instances of MasterApiController can execute simultaneously but in separate threads. There may of course also be MasterController instances executing MVC requests and also the WinForms app may be running some batch job or other.
But as far as I understand, ID reservation is made at the beginning of any UpdateDatabase call, in this way:
update "ECO_ID" set "BOLD_ID" = "BOLD_ID" + :N;
select "BOLD_ID" from "ECO_ID";
If the returned value is K, this will reserve N new IDs ranging from K - N to K - 1. Using ReadCommitted transactions everywhere should ensure that the update locks the ID data row, forcing any concurrent save operations to wait, then fetches the update result without interference from other transactions, then commits. At that point any other pending save operation can proceed with its own ID reservation. I fail to see how this could result in the same ID being used for multiple objects.
I should note that it does seem like it sometimes produces id duplicates within one single UpdateDatabase, i.e. when saving a set of new related objects, some of them end up with the same id. I haven't really confirmed this though.
Any ideas what might be going on here? What should I look for?
The issue is most likely that you use ReadCommitted isolation.
This allows two systems to simultaneously start a transaction, read the current value, increase the batch, and then save one after the other.
You must use Serializable isolation for key generation, i.e., only read things that are not currently part of a write operation.
MDriven uses two settings for isolation level: UpdateIsolationLevel and FetchIsolationLevel.
Set your UpdateIsolationLevel to Serializable.
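To make the difference concrete, here is a rough sketch of the reservation pattern under Serializable isolation, written with TransactionScope and SqlClient purely for illustration (the Firebird ADO.NET provider would be used the same way; this is not MDriven's internal code):

using System;
using System.Data.SqlClient;   // illustration only; FirebirdSql.Data.FirebirdClient is analogous
using System.Transactions;

public static long ReserveIdBlock(string connectionString, int count)
{
    var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        // Under Serializable, a second reserver must wait until the first commits,
        // so two callers can never read the same counter value.
        using (var command = new SqlCommand(
            "UPDATE \"ECO_ID\" SET \"BOLD_ID\" = \"BOLD_ID\" + @n; SELECT \"BOLD_ID\" FROM \"ECO_ID\";",
            connection))
        {
            command.Parameters.AddWithValue("@n", count);
            long upperBound = Convert.ToInt64(command.ExecuteScalar());
            scope.Complete();
            return upperBound - count;   // first id of the reserved block [K - N, K - 1]
        }
    }
}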

Does any asp.net data cache support background population of cache entries?

We have a data driven ASP.NET website which has been written using the standard pattern for data caching (adapted here from MSDN):
public DataTable GetData()
{
    string key = "DataTable";
    object item = Cache[key] as DataTable;
    if (item == null)
    {
        item = GetDataFromSQL();
        Cache.Insert(key, item, null, DateTime.Now.AddSeconds(300), TimeSpan.Zero);
    }
    return (DataTable)item;
}
The trouble with this is that the call to GetDataFromSQL() is expensive and the use of the site is fairly high. So every five minutes, when the cache drops, the site becomes very 'sticky' while a lot of requests are waiting for the new data to be retrieved.
What we really want to happen is for the old data to remain current while new data is periodically reloaded in the background. (The fact that someone might therefore see data that is six minutes old isn't a big issue - the data isn't that time sensitive). This is something that I can write myself, but it would be useful to know if any alternative caching engines (I know names like Velocity, memcache) support this kind of scenario. Or am I missing some obvious trick with the standard ASP.NET data cache?
You should be able to use the CacheItemUpdateCallback delegate, which is the sixth parameter of the fourth overload of Cache.Insert in the ASP.NET Cache:
Cache.Insert(key, value, dependency, absoluteExpiration,
    slidingExpiration, onUpdateCallback);
The following should work:
Cache.Insert(key, item, null, DateTime.Now.AddSeconds(300),
    Cache.NoSlidingExpiration, itemUpdateCallback);

private void itemUpdateCallback(string key, CacheItemUpdateReason reason,
    out object value, out CacheDependency dependency, out DateTime expiration,
    out TimeSpan slidingExpiration)
{
    // do your SQL call here and store it in 'value'
    value = FunctionToGetYourData();
    expiration = DateTime.Now.AddSeconds(300);
    dependency = null;                               // all out parameters must be assigned
    slidingExpiration = Cache.NoSlidingExpiration;
}
From MSDN:
When an object expires in the cache, ASP.NET calls the CacheItemUpdateCallback method with the key for the cache item and the reason you might want to update the item. The remaining parameters of this method are out parameters. You supply the new cached item and optional expiration and dependency values to use when refreshing the cached item.
The update callback is not called if the cached item is explicitly removed by using a call to Remove().
If you want the cached item to be removed from the cache, you must return null in the expensiveObject parameter. Otherwise, you return a reference to the new cached data by using the expensiveObject parameter. If you do not specify expiration or dependency values, the item will be removed from the cache only when memory is needed.
If the callback method throws an exception, ASP.NET suppresses the exception and removes the cached value.
I haven't tested this, so you might have to tinker with it a bit, but it should give you the basic idea of what you're trying to accomplish.
I can see that there's a potential solution to this using AppFabric (the cache formerly known as Velocity) in that it allows you to lock a cached item so it can be updated. While an item is locked, ordinary (non-locking) Get requests still work as normal and return the cache's current copy of the item.
Doing it this way would also allow you to separate out your GetDataFromSQL method to a different process, say a Windows Service, that runs every five minutes, which should alleviate your 'sticky' site.
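A rough sketch of what that could look like with the AppFabric client API (the cache, key names, and timeouts are illustrative; treat this as an outline under those assumptions, not a verified implementation):

using System;
using Microsoft.ApplicationServer.Caching;

// The refresher (e.g. a Windows Service) takes the pessimistic lock and replaces the item;
// ordinary Get calls elsewhere keep returning the current copy while the lock is held.
var cache = new DataCacheFactory().GetDefaultCache();

DataCacheLockHandle lockHandle;
object current = cache.GetAndLock("DataTable", TimeSpan.FromMinutes(1), out lockHandle);

object fresh = GetDataFromSQL();                     // the expensive call, done only by the refresher
cache.PutAndUnlock("DataTable", fresh, lockHandle);  // publish the new copy and release the lock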
Or...
Rather than just caching the data for five minutes at a time regardless, why not use a SqlCacheDependency object when you put the data into the cache, so that it'll only be refreshed when the data actually changes. That way you can cache the data for longer periods, so you get better performance, and you'll always be showing the up-to-date data.
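A hedged sketch of the polling-style SqlCacheDependency: it assumes the table has already been enabled for notifications (aspnet_regsql or SqlCacheDependencyAdmin) and that a <sqlCacheDependency> database entry named "MyDatabase" exists in web.config; those names, and "YourTable", are placeholders.

using System;
using System.Web;
using System.Web.Caching;

// Cache the data until the table actually changes, instead of on a fixed five-minute clock.
var dependency = new SqlCacheDependency("MyDatabase", "YourTable");

HttpRuntime.Cache.Insert(
    "DataTable",
    GetDataFromSQL(),
    dependency,                      // evicted only when the watched table changes
    Cache.NoAbsoluteExpiration,
    Cache.NoSlidingExpiration);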
(BTW, top tip for making your intention clearer when you're putting objects into the cache: the Cache has a NoSlidingExpiration (and a NoAbsoluteExpiration) constant available that's more readable than your TimeSpan.Zero.)
First, put the data you actually need in a lean class (also known as a POCO) instead of that DataTable hog.
Second, use the cache plus a hash: when your time dependency expires you can spawn an async delegate to fetch new data, while your old data is still safe in a separate Hashtable (not a Dictionary, which is not safe for multi-reader/single-writer threading).
Depending on the kind of data and the time/budget to restructure the SQL side, you could potentially fetch only things that have a LastWrite newer than your update window. You will need a two-step update (you have to copy data from the hash-kept object into a new object; stuff in the hash is strictly read-only for any use, or all hell will break loose).
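A sketch of that cache-and-hash idea (the class, LoadLeanPocos, and the five-minute interval are placeholders): readers always see a complete, read-only Hashtable, and the refresh publishes a brand-new one by swapping the reference.

using System;
using System.Collections;
using System.Threading;

public static class LeanCache
{
    // Hashtable is safe for many readers and one writer; readers only ever see this reference.
    private static Hashtable current = new Hashtable();

    public static Hashtable Current { get { return current; } }

    // Kick off a background refresh every 5 minutes; requests are never blocked by it.
    private static readonly Timer refreshTimer =
        new Timer(Refresh, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));

    private static void Refresh(object state)
    {
        Hashtable fresh = LoadLeanPocos();         // build a brand-new table from SQL (placeholder)
        Interlocked.Exchange(ref current, fresh);  // publish atomically; the old table stays valid for in-flight readers
    }

    private static Hashtable LoadLeanPocos()
    {
        // placeholder: fill with lean POCOs keyed by id
        return new Hashtable();
    }
}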
Oh, and SqlCacheDependency is notorious for being unreliable and can send your system into a frenzy of updates.

Make a final call to the Database when user leaves website (ASPX)?

I have a system set up to lock certain content in a database table so only one user can edit that content at a time. Easy enough, and that part is working fine. But now I'm at a roadblock: how do I send a request to "unlock" the content? I have the stored procedure to unlock the content, but how/where would I call it when the user just closes their browser?
You also can't know when the user turns off his computer. You have to do it the other way around.
Require that the lock be renewed periodically. Only the web site would do the periodic renewal. If the user stops using the web site, then the lock expires.
Otherwise, require the user to explicitly unlock the content. Other users who want to edit the content can then go yell at the first user when they can't do their jobs. Not a technological solution, but still a good one. Shame works.
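A sketch of that lease-style renewal (the T_Content table and the LockedBy/LockedUntil columns are hypothetical): the page calls this periodically while the user is editing, and the lock simply expires once the calls stop.

using System.Data.SqlClient;

public static bool TryAcquireOrRenewLock(string connectionString, int contentId, int userId)
{
    // Succeeds if the row is unlocked, already ours, or the previous lease has expired.
    const string sql = @"
        UPDATE T_Content
        SET LockedBy = @user, LockedUntil = DATEADD(MINUTE, 5, GETUTCDATE())
        WHERE Content_ID = @id
          AND (LockedBy IS NULL OR LockedBy = @user OR LockedUntil < GETUTCDATE());";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@user", userId);
        command.Parameters.AddWithValue("@id", contentId);
        connection.Open();
        return command.ExecuteNonQuery() == 1;   // one row updated means we hold (or renewed) the lock
    }
}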
The best thing you can really do is add something to your Session_End in your global.asax. Unfortunately, this won't fire until the session times out.
When the user clicks the "X" in their browser, there isn't any way to guarantee the browser will send you anything back.
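For what it's worth, a minimal sketch of that Session_End hookup (the stored procedure name, session key, and connection string are all placeholders):

// In Global.asax.cs; only fires with InProc session state, and only when the session times out.
protected void Session_End(object sender, EventArgs e)
{
    object contentId = Session["LockedContentId"];   // whatever key you used to remember the lock
    if (contentId == null) return;

    using (var connection = new System.Data.SqlClient.SqlConnection(unlockConnectionString))
    using (var command = new System.Data.SqlClient.SqlCommand("usp_UnlockContent", connection))
    {
        command.CommandType = System.Data.CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@ContentId", (int)contentId);
        connection.Open();
        command.ExecuteNonQuery();
    }
}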
A quick note on the Session_End approaches. If you use this method, then you have to ensure:
1. That session state is InProc, e.g., add something like this to your Web.config:
<sessionState mode="InProc" timeout="timeout_in_minutes"/>
2. That you've set up IIS so as not to recycle worker processes during normal operation (see for instance this blog post).
Edit:
Not directly answering the question, but another approach would be to use optimistic concurrency control on the data in question.
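Roughly, optimistic concurrency means no lock is held at all; the save just fails if someone else got there first. A sketch with a rowversion column (table and column names are hypothetical):

using System.Data.SqlClient;

public static bool TrySaveContent(string connectionString, int contentId, string newBody, byte[] originalRowVersion)
{
    // The WHERE clause matches only if nobody has changed the row since we read it.
    const string sql = @"
        UPDATE T_Content
        SET Body = @body
        WHERE Content_ID = @id AND RowVersion = @rowVersion;";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@body", newBody);
        command.Parameters.AddWithValue("@id", contentId);
        command.Parameters.AddWithValue("@rowVersion", originalRowVersion);
        connection.Open();
        return command.ExecuteNonQuery() == 1;   // false means someone else saved first; let the user merge/retry
    }
}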
There is no such event as "user closes browser".
Nevertheless, I can think of two workarounds:
1. Use JavaScript/Ajax to call a method in your page continuously (let's say every 10 seconds). The DateTime of the last call needs to be stored somewhere. Then write a Windows service that checks every second which sessions have timed out, and perform your custom action there.
2. Use the global.asax Session_End() event. (It cannot be used with every SessionState mode; look up which ones it works with.)
Trying to leave a Stack Overflow answer page pops up an "are you sure" dialog. Perhaps during the on-page-leave event that SO uses (or however SO does this), you could send a final request with an XmlHttpRequest object. This won't cover the case where the browser process closes unexpectedly (use Session_End for that), but it will at least send the "I'm closed" event earlier.
I think your one stored procedure can do the locking and unlocking (used with "Select @strNewMax As NewMax")...
Here is an example from a system I have:
Declare @strNewMax Char
Select @strNewMax = 'N'

BEGIN TRANSACTION
/* Lock only the rows for this Item ID, and hold those locks throughout the transaction. */
If @BidAmount > (Select Max(AB_Bid_AMT) From AuctionBid With(updlock, holdlock) Where AB_AI_ID = @AuctionItemId)
Begin
    Insert Into AuctionBid (AB_AI_ID, AB_Bid_AMT, AB_Emp_ID, AB_Entry_DTM)
    Select @AuctionItemId, @BidAmount, @EmployeeId, GetDate()
    Select @strNewMax = 'Y'
End
COMMIT TRANSACTION

Select @strNewMax As NewMax
This will insert a record as the next highest bid, all while locking the entire table, so no other bids are processed at the same time. It will return either a 'Y' or 'N' depending on if it worked or not.
Maybe you can take this and adjust it to fit your application.

Httpruntime cache keys not unique?

Although I have specified a unique key, it seems the following code will return one value for 5 requests, then another for the next couple, then revert back to the value saved in the original request, and it just continues until there are tens of different objects all stored under the same key.
It then seems almost random which of these values it will return from the cache.
string strDateTime = string.Empty;
string cachename = "datetimeexample";
object cachedobject = HttpRuntime.Cache.Get(cachename);

if (cachedobject != null)
    strDateTime = (string)cachedobject;
else
{
    strDateTime = DateTime.Now.ToString();
    HttpRuntime.Cache.Insert(cachename, strDateTime, null, DateTime.MaxValue,
        TimeSpan.FromDays(10), CacheItemPriority.NotRemovable, null);
}

Response.Write(strDateTime + " keys:" + HttpRuntime.Cache.Count);
Very confused, is this because of threading or something?
Ignoring the possibility of a server farm and load balancing, this behaviour can be caused by the application pool running as a web-garden. To quote the relevant section from MSDN:
Because Web gardens enable the use of multiple processes, each process will have its own copy of application state, in-process session state, caches, and static data. Web gardens should not be used for all applications, especially if they need to maintain state. Be sure to benchmark the performance of the application before deciding whether Web garden mode is appropriate.
This will cause it to appear as if caching is storing multiple values for the same key, effectively having duplicate entries in the cache.
To resolve this in IIS 7, open the application pool's Advanced Settings and set Maximum Worker Processes to 1. For IIS 6, see the MSDN article (With pretty screenshots).
Albeit 8 months late, I'm answering this question because I found it long before I found this decent article on web-garden gotchas. Hopefully this answer will save future searchers a chunk of time. :)
Your cache key is always 'datetimeexample'; therefore, you will always have one object in the cache, and you will always receive that object back.
I am not quite sure what you are trying to accomplish here; as far as I'm concerned, this behaves exactly the way it's supposed to.

Need suggestion for ASP.Net in-memory queue

I have a requirement to create an HttpHandler that will serve an image file (a simple static file) and also insert a record into a SQL Server table (e.g. http://site/some.img, where some.img is an HttpHandler). I need an in-memory object (like a generic List) that I can add items to on each request (I also have to consider a few hundred or thousand requests per second), and I should be able to unload this in-memory object to a SQL table using SqlBulkCopy.
List --> DataTable --> SqlBulkCopy
I thought of using the Cache object: create a generic List object, save it in HttpContext.Cache, and insert a new item into it on every request. This will NOT work, as the CacheItemRemovedCallback would fire right away when the HttpHandler tries to add a new item. I can't use the Cache object as an in-memory queue.
Can anybody suggest anything? Would I be able to scale in the future if the load increases?
Why would CacheItemRemovedCallback fire when you ADD something to the queue? That doesn't make sense to me... Even if it does fire, there's no requirement to do anything here. Perhaps I am misunderstanding your requirements?
I have quite successfully used the Cache object in precisely this manner. That is what it's designed for and it scales pretty well. I stored a Hashtable which was accessed on every app page request and updated/cleared as needed.
Option two... do you really need the queue? SQL Server will scale pretty well also if you just want to write directly into the DB. Use a shared connection object and/or connection pooling.
How about just using a generic List to store the requests and using a different thread to do the SqlBulkCopy?
This way, storing requests in the list won't block the response for too long, and the background thread will be able to update SQL on its own time, every 5 minutes or so.
You can even base the background thread on the Cache mechanism by performing the work in a CacheItemRemovedCallback.
Just insert some object with a removal time of 5 minutes and reinsert it at the end of the processing work.
Thanks Alex & Bryan for your suggestions.
Bryan: When I try to replace the List object in the Cache for the second request (now the count should be 2), the CacheItemRemovedCallback gets fired because I'm replacing the current Cache object with the new one. Initially, I also thought this was weird behavior, so I've got to look deeper into it.
Also, for the second suggestion, I will try to insert records (with the cached SqlConnection object) and see what performance I get when I do the stress test. I doubt I'll be getting fantastic numbers, as it's an I/O operation.
I'll keep digging on my side for an optimal solution meanwhile with your suggestions.
You can create a conditional requirement within the callback to ensure you are working on a cache entry that has been hit from an expiration instead of a remove/replace (in VB since I had it handy):
Private Shared Sub CacheRemovalCallbackFunction(ByVal cacheKey As String, ByVal cacheObject As Object, ByVal removalReason As Web.Caching.CacheItemRemovedReason)
Select Case removalReason
Case Web.Caching.CacheItemRemovedReason.Expired, Web.Caching.CacheItemRemovedReason.DependencyChanged, Web.Caching.CacheItemRemovedReason.Underused
' By leaving off Web.Caching.CacheItemRemovedReason.Removed, this will exclude items that are replaced or removed explicitly (Cache.Remove) '
End Select
End Sub
Edit Here it is in C# if you need it:
private static void CacheRemovalCallbackFunction(string cacheKey, object cacheObject, System.Web.Caching.CacheItemRemovedReason removalReason)
{
switch(removalReason)
{
case System.Web.Caching.CacheItemRemovedReason.DependencyChanged:
case System.Web.Caching.CacheItemRemovedReason.Expired:
case System.Web.Caching.CacheItemRemovedReason.Underused:
// This excludes the option System.Web.Caching.CacheItemRemovedReason.Removed, which is triggered when you overwrite a cache item or remove it explicitly (e.g., HttpRuntime.Cache.Remove(key))
break;
}
}
To expand on my previous comment... I get the picture you are thinking about the cache incorrectly. If you have an object stored in the Cache, say a Hashtable, any update/storage into that Hashtable will be persisted without you explicitly modifying the contents of the Cache. You only need to add the Hashtable to the Cache once, either at application startup or on the first request.
If you are worried about the bulk copy and page-request updates happening simultaneously, then I suggest you simply have TWO cached lists. Have one be the list which is updated as page requests come in, and one list for the bulk copy operation. When one bulk copy is finished, swap the lists and repeat. This is similar to double-buffering video RAM for video games or video apps.
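A sketch of that two-list (double-buffer) idea for the image-request logger, using ConcurrentQueue (.NET 4+) for the thread-safe appends; the class, destination table name, and connection string are placeholders:

using System;
using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public static class RequestLogBuffer
{
    private static ConcurrentQueue<string> active = new ConcurrentQueue<string>();

    // Called from the HttpHandler on every image request: cheap, never blocks on SQL.
    public static void Log(string imageName)
    {
        active.Enqueue(imageName);
    }

    // Called from a Timer (e.g. every 5 minutes) or from a CacheItemRemovedCallback.
    public static void Flush(string connectionString)
    {
        // Swap buffers: new requests go to the fresh queue while we drain the old one.
        ConcurrentQueue<string> draining = Interlocked.Exchange(ref active, new ConcurrentQueue<string>());

        var table = new DataTable();
        table.Columns.Add("ImageName", typeof(string));
        string item;
        while (draining.TryDequeue(out item))
            table.Rows.Add(item);

        if (table.Rows.Count == 0) return;

        using (var bulk = new SqlBulkCopy(connectionString) { DestinationTableName = "T_ImageRequests" })
            bulk.WriteToServer(table);
    }
}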

Resources