HttpRuntime cache keys not unique? - asp.net

Although I have specified a unique key, it seems the following code will return one value for five requests, then another for the next couple, then revert to the value saved on the original request, and so on, until there are tens of different objects apparently stored under the same key.
It then seems almost random which of these values it will return from the cache.
string strDateTime = string.Empty;
string cachename = "datetimeexample";
object cachedobject = HttpRuntime.Cache.Get(cachename);
if (cachedobject != null)
{
    strDateTime = (string)cachedobject;
}
else
{
    strDateTime = DateTime.Now.ToString();
    HttpRuntime.Cache.Insert(cachename, strDateTime, null, DateTime.MaxValue,
        TimeSpan.FromDays(10), CacheItemPriority.NotRemovable, null);
}
Response.Write(strDateTime + " keys:" + HttpRuntime.Cache.Count);
Very confused, is this because of threading or something?

Ignoring the possibility of a server farm and load balancing, this behaviour can be caused by the application pool running as a web-garden. To quote the relevant section from MSDN:
Because Web gardens enable the use of multiple processes, each process will have its own copy of application state, in-process session state, caches, and static data. Web gardens should not be used for all applications, especially if they need to maintain state. Be sure to benchmark the performance of the application before deciding whether Web garden mode is appropriate.
Because each worker process maintains its own cache, the same key can hold a different value in each process, which makes it appear as though the cache is storing multiple values for the same key - effectively duplicate entries.
To resolve this in IIS 7, open the application pool's Advanced Settings and set Maximum Worker Processes to 1. For IIS 6, see the MSDN article (with pretty screenshots).
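If you prefer to script the IIS 7 change, the same setting can be flipped with appcmd (the pool name below is an example, not something from the question):

%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /processModel.maxProcesses:1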
Albeit 8 months late, I'm answering this question because I found it long before I found this decent article on web-garden gotchas. Hopefully this answer will save future searchers a chunk of time. :)

Your cache key is always 'datetimeexample'; therefore, you will always have one object in the cache, and you will always receive that object back.
I am not quite sure what you are trying to accomplish here; as far as I'm concerned, this behaves exactly the way it's supposed to.

Related

MDriven ECO_ID duplicates

We appear to have a problem with MDriven generating the same ECO_ID for multiple objects. For the most part it seems to happen in conjunction with unexpected process shutdowns and/or server shutdowns, but it does also happen during normal activity.
Our system consists of one ASP.NET application and one WinForms application. The ASP.NET app is set up in IIS to use a single worker process. We have a mixture of WebForms and MVC, including ApiControllers. We're using a rather old version of the ECO packages: 7.0.0.10021. We're on VS 2017, and the target framework is 4.7.1.
We have it configured to use 64-bit integers for object ids. The database is Firebird. The SQL configuration is set to use ReadCommitted transaction isolation.
As far as I can tell we have configured EcoSpaceStrategyHandler with EcoSpaceStrategyHandler.SessionStateMode.Never, which should mean that EcoSpaces are not reused at all, right? (Why would I even use EcoSpaceStrategyHandler in this case, instead of just creating EcoSpace normally with the new keyword?)
We have created MasterController : Controller and MasterApiController : ApiController classes that we use for all our controllers. These have an EcoSpace property that simply does this:
if (ecoSpace == null)
{
    if (ecoSpaceStrategyHandler == null)
    {
        ecoSpaceStrategyHandler = new EcoSpaceStrategyHandler(
            EcoSpaceStrategyHandler.SessionStateMode.Never,
            typeof(DiamondsEcoSpace),
            null,
            false);
    }
    ecoSpace = (DiamondsEcoSpace)ecoSpaceStrategyHandler.GetEcoSpace();
}
return ecoSpace;
That is: if no strategy handler has been created, create one specifying no pooling and no session-state persisting of EcoSpaces. Then, if no EcoSpace has been fetched, fetch one from the strategy handler and return it. Is this an acceptable approach? Why would it be better than simply doing this:
if (ecoSpace == null)
    ecoSpace = new DiamondsEcoSpace();
return ecoSpace;
In aspx we have a master page that has an EcoSpaceManager. It has been configured to use a pool but SessionStateMode is Never. It has EnableViewState set to true. Is this acceptable? Does it mean that EcoSpaces will be pooled but inactivated between round trips?
It is possible that we receive multiple incoming API calls in tight succession, so that one API call hasn't been completed before the next one comes in. I assume that this means that multiple instances of MasterApiController can execute simultaneously but in separate threads. There may of course also be MasterController instances executing MVC requests and also the WinForms app may be running some batch job or other.
But as far as I understand, id reservation is made at the beginning of any UpdateDatabase call, in this way:
update "ECO_ID" set "BOLD_ID" = "BOLD_ID" + :N;
select "BOLD_ID" from "ECO_ID";
If the returned value is K, this will reserve N new ids ranging from K - N to K - 1. Using ReadCommitted transactions everywhere should ensure that the update locks the id data row, forcing any concurrent save operations to wait, then fetches the update result without interference from other transactions, then commits. At that point any other pending save operation can proceed with its own id reservation. I fail to see how this could result in the same id being used for multiple objects.
I should note that it does seem to sometimes produce id duplicates within a single UpdateDatabase call, i.e. when saving a set of new related objects, some of them end up with the same id. I haven't really confirmed this, though.
Any ideas what might be going on here? What should I look for?
The issue is most likely that you use ReadCommitted isolation.
This allows two systems to simultaneously start a transaction, read the current value, increase the batch, and then save one after the other.
You must use Serializable isolation for key generation, i.e. only read things that are not currently in a write operation.
MDriven uses two settings for the isolation level: UpdateIsolationLevel and FetchIsolationLevel.
Set your UpdateIsolationLevel to Serializable.
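To illustrate the failure mode described above, here is a possible interleaving of two concurrent reservations under ReadCommitted (the values are invented for the example):

-- Two saves each try to reserve 10 ids; BOLD_ID starts at 100.
-- T1: reads BOLD_ID = 100 inside its transaction
-- T2: reads BOLD_ID = 100 inside its own transaction (ReadCommitted permits this)
-- T1: writes BOLD_ID = 110 and commits  -> hands out ids 100..109
-- T2: writes BOLD_ID = 110 and commits  -> hands out ids 100..109 again (lost update)
-- Under Serializable, T2 would block or fail instead of overlapping T1's range.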

Session compression. Negative and positive sides

In web.config you can enable session compression.
<sessionState mode="InProc" customProvider="DefaultSessionProvider" compressionEnabled="true" />
What are positive and negative sides of this action?
Well, on the positive side, you need less space.
On the negative side, it needs time to compress, so it's slower.
Let me add that, in my opinion, if you use sessions at all, you've made an architectural mistake (exceptions may apply to this rule, but very, very rarely).
It's not a good idea, because if a page writes something into a session, this gets overwritten if I simultaneously open the same page in another browser window (it's the same session).
And because InProc sessions expire when you change something in the web.config file, you can create an unlimited number of bugs for EVERY currently active user...
Plus you lose InProc sessions if the VM gets moved to another server (cloud environments, failover, dynamic scale-out).
Also, the InProc provider doesn't require objects to be marked as serializable.
If you change to, for example, an SQL session provider, you'll get exceptions in all places where you put an object that hasn't been marked as serializable into the session.
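A minimal sketch of what that means in practice (the type below is an invented example):

// Out-of-proc providers (StateServer, SQLServer) serialize everything stored in
// the session, so the stored types must be marked [Serializable].
[Serializable]
public class UserContext   // hypothetical example type
{
    public int UserId;
    public string DisplayName;
}

// Session["user"] = new UserContext { UserId = 12435 };   // fine with the SQL provider
// Session["conn"] = new SqlConnection("...");             // throws: not serializable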
For example, when you need to query all the locations a user may access (according to portfolio rights in T_SYS_LocationRights):
You get the UserID from the formsAuth-cookie, then use it as the parameter:
DECLARE @userID integer
SET @userID = 12435

SELECT * FROM T_Locations
WHERE (1=1)
AND
(
    (
        SELECT ISNULL(MAX(CAST(T_SYS_LocationRights.LR_IsRead AS integer)), 0)
        FROM T_SYS_LocationRights
        INNER JOIN T_User_Groups
            ON T_User_Groups.USRGRP_GRP = T_SYS_LocationRights.LR_GRANTEE_ID
        WHERE T_SYS_LocationRights.LR_LC_UID = T_Locations.LC_UID
          AND T_User_Groups.USRGRP_USR = @userID
    ) = 1
)
Don't just query something according to the maxim:
if you'll ever need it, it's already there.
Designing a web application (which is multi-threaded by design) around that maxim is a very bad idea.
If you don't need it, don't query it.
If you need it, query it.
If you needed it, don't store it in the session; it's better to query it again if necessary.
You can win much more time by executing all database operations at once: get all the data you need into a System.Data.DataSet (one query operation, one connection open-and-close) and then work from that, as in the sketch below. When the page reloads, you can always reload the data (in fact, you even should).
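A hedged sketch of that pattern, reusing the tables from the query above (the connection string and exact queries are assumptions):

using System.Data;
using System.Data.SqlClient;

public static DataSet LoadPageData(int userId)
{
    DataSet ds = new DataSet();
    using (SqlConnection conn = new SqlConnection("...your connection string..."))
    using (SqlDataAdapter da = new SqlDataAdapter(
        "SELECT * FROM T_Locations; " +
        "SELECT T_SYS_LocationRights.* FROM T_SYS_LocationRights " +
        "INNER JOIN T_User_Groups ON T_User_Groups.USRGRP_GRP = T_SYS_LocationRights.LR_GRANTEE_ID " +
        "WHERE T_User_Groups.USRGRP_USR = @userID;", conn))
    {
        da.SelectCommand.Parameters.AddWithValue("@userID", userId);
        da.Fill(ds); // one connection open-and-close; results land in ds.Tables[0] and ds.Tables[1]
    }
    return ds;
}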
Don't use the session as a cache. It's not a cache.

Does any asp.net data cache support background population of cache entries?

We have a data-driven ASP.NET website which has been written using the standard pattern for data caching (adapted here from MSDN):
public DataTable GetData()
{
    string key = "DataTable";
    DataTable item = Cache[key] as DataTable;
    if (item == null)
    {
        item = GetDataFromSQL();
        Cache.Insert(key, item, null, DateTime.Now.AddSeconds(300), TimeSpan.Zero);
    }
    return item;
}
The trouble with this is that the call to GetDataFromSQL() is expensive and the use of the site is fairly high. So every five minutes, when the cache drops, the site becomes very 'sticky' while a lot of requests are waiting for the new data to be retrieved.
What we really want to happen is for the old data to remain current while new data is periodically reloaded in the background. (The fact that someone might therefore see data that is six minutes old isn't a big issue - the data isn't that time sensitive). This is something that I can write myself, but it would be useful to know if any alternative caching engines (I know names like Velocity, memcache) support this kind of scenario. Or am I missing some obvious trick with the standard ASP.NET data cache?
You should be able to use the CacheItemUpdateCallback delegate, which is the sixth parameter of the fourth overload of Insert on the ASP.NET Cache:
Cache.Insert(key, value, dependency, absoluteExpiration,
    slidingExpiration, onUpdateCallback);
The following should work:
Cache.Insert(key, item, null, DateTime.Now.AddSeconds(300),
    Cache.NoSlidingExpiration, itemUpdateCallback);
private void itemUpdateCallback(string key, CacheItemUpdateReason reason,
    out object value, out CacheDependency dependency, out DateTime expiration,
    out TimeSpan slidingExpiration)
{
    // do your SQL call here and store the result in 'value'
    value = FunctionToGetYourData();
    expiration = DateTime.Now.AddSeconds(300);
    dependency = null;                             // all out parameters must be assigned
    slidingExpiration = Cache.NoSlidingExpiration;
}
From MSDN:
When an object expires in the cache, ASP.NET calls the CacheItemUpdateCallback method with the key for the cache item and the reason you might want to update the item. The remaining parameters of this method are out parameters. You supply the new cached item and optional expiration and dependency values to use when refreshing the cached item.
The update callback is not called if the cached item is explicitly removed by using a call to Remove().
If you want the cached item to be removed from the cache, you must return null in the expensiveObject parameter. Otherwise, you return a reference to the new cached data by using the expensiveObject parameter. If you do not specify expiration or dependency values, the item will be removed from the cache only when memory is needed.
If the callback method throws an exception, ASP.NET suppresses the exception and removes the cached value.
I haven't tested this, so you might have to tinker with it a bit, but it should give you the basic idea of what you're trying to accomplish.
I can see that there's a potential solution to this using AppFabric (the cache formerly known as Velocity) in that it allows you to lock a cached item so it can be updated. While an item is locked, ordinary (non-locking) Get requests still work as normal and return the cache's current copy of the item.
Doing it this way would also allow you to separate out your GetDataFromSQL method to a different process, say a Windows Service, that runs every five minutes, which should alleviate your 'sticky' site.
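For reference, a sketch of what the updater side might look like against the AppFabric client API (the cache name, key, and lock timeout are assumptions):

// Uses Microsoft.ApplicationServer.Caching (the AppFabric client).
DataCache cache = new DataCacheFactory().GetDefaultCache();
DataCacheLockHandle lockHandle;

// Take the pessimistic lock; plain Get() calls elsewhere keep returning
// the cache's current copy of the item while it is locked.
object current = cache.GetAndLock("DataTable", TimeSpan.FromSeconds(30), out lockHandle);
DataTable fresh = GetDataFromSQL();                 // the expensive call from the question
cache.PutAndUnlock("DataTable", fresh, lockHandle); // swap in the new data and release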
Or...
Rather than just caching the data for five minutes at a time regardless, why not use a SqlCacheDependency object when you put the data into the cache, so that it'll only be refreshed when the data actually changes. That way you can cache the data for longer periods, so you get better performance, and you'll always be showing the up-to-date data.
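A hedged sketch of that approach (the database entry name "AppDb" and table "T_Data" are invented; the database and table must first be enabled for notifications with aspnet_regsql.exe, with a matching sqlCacheDependency entry in web.config):

SqlCacheDependency dependency = new SqlCacheDependency("AppDb", "T_Data");
Cache.Insert("DataTable", GetDataFromSQL(), dependency,
    Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration);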
(BTW, top tip for making your intention clearer when you're putting objects into the cache: the Cache has NoSlidingExpiration and NoAbsoluteExpiration constants available that are more readable than TimeSpan.Zero.)
First, put the data you actually need in a lean class (also known as a POCO) instead of that DataTable hog.
Second, use cache and hash: when your time dependency expires you can spawn an async delegate to fetch new data, while your old data is still safe in a separate hash table (not Dictionary - it's not safe for multi-reader, single-writer threading).
Depending on the kind of data and the time/budget to restructure the SQL side, you could potentially fetch only things that have a LastWrite younger than your update window. You will need a two-step update (you have to copy the data from the hash-kept object into a new object; the stuff in the hash is strictly read-only for any use, or all hell will break loose). A rough sketch follows.
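A rough sketch of that cache-and-hash pattern (all names are hypothetical; treat it as an illustration, not the author's actual code):

using System;
using System.Collections;
using System.Threading;

public static class SnapshotCache
{
    private class Snapshot
    {
        public DateTime Fetched;
        public object Data; // strictly read-only once published
    }

    // Hashtable supports many concurrent readers with one writer; Dictionary<,> does not.
    private static readonly Hashtable snapshots = new Hashtable();
    private static int refreshing; // 0 = idle, 1 = a background refresh is running

    public static object Get(string key, Func<object> fetch, TimeSpan maxAge)
    {
        Snapshot entry = (Snapshot)snapshots[key];
        if (entry == null)
        {
            // only the very first request pays the full cost
            Snapshot first = new Snapshot { Fetched = DateTime.UtcNow, Data = fetch() };
            lock (snapshots) { snapshots[key] = first; }
            return first.Data;
        }
        if (DateTime.UtcNow - entry.Fetched > maxAge &&
            Interlocked.CompareExchange(ref refreshing, 1, 0) == 0)
        {
            // refresh in the background; readers keep seeing the old snapshot meanwhile
            ThreadPool.QueueUserWorkItem(delegate
            {
                try
                {
                    Snapshot next = new Snapshot { Fetched = DateTime.UtcNow, Data = fetch() };
                    lock (snapshots) { snapshots[key] = next; }
                }
                finally { refreshing = 0; }
            });
        }
        return entry.Data; // possibly a few minutes stale, but never a blocking miss
    }
}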
Oh and SqlCacheDependency is notorious for being unreliable and can make your system break into mad updates.

Default duration of Cache.Insert in ASP.NET

If I have the following line, when should I expect the cache to expire?
System.Web.HttpRuntime.Cache.Insert("someKey", "Test value");
"Never", that is, as soon as memory is low and ASP.NET Cache thinks it has something more important to keep.
This will insert the object without an explicit expiration set. This means the object will not automatically be removed from the cache, unless the runtime decides to remove stuff from the cache due to high memory usage.
Calling this overload is the same as calling
Cache.Insert(
    key, value,
    null,                        /* CacheDependency */
    Cache.NoAbsoluteExpiration,  /* absoluteExpiration */
    Cache.NoSlidingExpiration,   /* slidingExpiration */
    CacheItemPriority.Normal,    /* priority */
    null                         /* onRemoveCallback */
);
BTW: you can use .NET Reflector to find out such things.

Moving ViewState out of the page?

We are trying to lighten our page load as much as possible. Since ViewState can sometimes swell to 100 KB per page, I'd love to completely eliminate it.
I'd love to hear some techniques other people have used to move ViewState to a custom provider.
That said, a few caveats:
We serve on average 2 million unique visitors per hour.
Because of this, database reads have been a serious performance issue, so I don't want to store ViewState in the database.
We also are behind a load balancer, so any solution has to work with the user bouncing from machine to machine per postback.
Ideas?
How do you handle Session State? There is a built-in "store the viewstate in the session state" provider. If you are storing the session state in some fast, out of proc system, that might be the best option for the viewstate.
edit: to do this, add the following code to your Page classes / global page base class:
protected override PageStatePersister PageStatePersister
{
    get { return new SessionPageStatePersister(this); }
}
Also... this is by no means a perfect (or even good) solution to a large viewstate. As always, minimize the size of the viewstate as much as possible. However, the SessionPageStatePersister is relatively intelligent: it avoids storing an unbounded number of viewstates per session, while also avoiding storing only a single viewstate per session.
I have tested many ways to remove the load of viewstate from the page, and among all the hacks and software out there, the only thing that is truly scalable is the StrangeLoops As10000 appliance. Transparent, no need to change the underlying application.
As previously stated, I have used the database to store the ViewState in the past. Although this works for us, we don't come close to 2 million unique visitors per hour.
I think a hardware solution is definitely the way to go, whether using the StrangeLoop products or another product.
The following works quite well for me:
string vsid;

protected override object LoadPageStateFromPersistenceMedium()
{
    Pair vs = base.LoadPageStateFromPersistenceMedium() as Pair;
    vsid = vs.First as string;
    object result = Session[vsid];
    Session.Remove(vsid);
    return result;
}

protected override void SavePageStateToPersistenceMedium(object state)
{
    if (vsid == null)
    {
        vsid = Guid.NewGuid().ToString();
    }
    Session[vsid] = state;
    base.SavePageStateToPersistenceMedium(new Pair(vsid, null));
}
You can always compress ViewState so you get the benefits of ViewState without so much bloat:
public partial class _Default : System.Web.UI.Page
{
    protected override object LoadPageStateFromPersistenceMedium()
    {
        string viewState = Request.Form["__VSTATE"];
        byte[] bytes = Convert.FromBase64String(viewState);
        bytes = Compressor.Decompress(bytes);
        LosFormatter formatter = new LosFormatter();
        return formatter.Deserialize(Convert.ToBase64String(bytes));
    }

    protected override void SavePageStateToPersistenceMedium(object viewState)
    {
        LosFormatter formatter = new LosFormatter();
        StringWriter writer = new StringWriter();
        formatter.Serialize(writer, viewState);
        string viewStateString = writer.ToString();
        byte[] bytes = Convert.FromBase64String(viewStateString);
        bytes = Compressor.Compress(bytes);
        ClientScript.RegisterHiddenField("__VSTATE", Convert.ToBase64String(bytes));
    }

    // ...
}
using System.IO;
using System.IO.Compression;

public static class Compressor
{
    public static byte[] Compress(byte[] data)
    {
        MemoryStream output = new MemoryStream();
        GZipStream gzip = new GZipStream(output, CompressionMode.Compress, true);
        gzip.Write(data, 0, data.Length);
        gzip.Close();
        return output.ToArray();
    }

    public static byte[] Decompress(byte[] data)
    {
        MemoryStream input = new MemoryStream();
        input.Write(data, 0, data.Length);
        input.Position = 0;
        GZipStream gzip = new GZipStream(input, CompressionMode.Decompress, true);
        MemoryStream output = new MemoryStream();
        byte[] buff = new byte[64];
        int read = gzip.Read(buff, 0, buff.Length);
        while (read > 0)
        {
            output.Write(buff, 0, read);
            read = gzip.Read(buff, 0, buff.Length);
        }
        gzip.Close();
        return output.ToArray();
    }
}
Due to the typical organizational bloat, requesting new hardware takes eons, and requesting hardware that would involve a complete rewire of our current setup would probably get some severe resistance from the engineering department.
I really need to come up with a software solution, because that's the only world I have some control over.
Yay for Enterprise :(
I've tried to find some of the products I had researched in the past that work just like StrangeLoops (but software-based). It looks like they all went out of business; the only one from my list that is still around is ScaleOut, but they specialize in session state caching.
I understand how hard it is to sell hardware solutions to senior management, but it is always a good idea to at least get management to agree to listen to the hardware's sales rep. I would much rather put in some hardware that presents me with an immediate solution, because it allows me (or buys me some time) to get some other real work done.
I understand, it really sucks, but the alternative is to change your code for optimization, and that could cost a lot more than getting an appliance.
Let me know if you find another software based solution.
I'm going to see if I can come up with a way to leverage our current State Server to hold the viewstate in memory; I should be able to use the user session ID to keep things synced up between machines.
If I come up with a good solution, I'll remove any IP protected code and put it out for public use.
Oh no, red tape. Well, this is going to be a tall order to fill. You mentioned here that you use a state server to serve your session state. How do you have this set up? Maybe you can do something similar here also?
Edit
Awh @Jonathan, you posted while I was typing this answer up. I think going that route could be promising. One thing is that it will definitely be memory-intensive.
@Mike, I don't think storing it in the session information is a good idea, due to the memory intensiveness of viewstate and also how often you need to access it; session state is accessed a lot less often than the viewstate. I would keep the two separate.
I think the ultimate solution would be storing the ViewState on the client somehow, and it may be worth looking at. With Google Gears, this could be possible now.
Have you considered whether you really need all that viewstate? For example, if you populate a datagrid from a database, all the data is saved in viewstate by default. However, if the grid is just for presenting data, you don't need a form at all, and hence no viewstate.
You only need viewstate when there is some interaction with the user through postbacks, and even then the actual form data may be sufficient to recreate the view. You can selectively disable viewstate for controls on the page, as in the sketch below.
You have a very special UI if you actually need 100 KB of viewstate. If you reduce the viewstate to what is absolutely necessary, keeping the viewstate in the page might turn out to be the easiest and most scalable option.
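A minimal code-behind sketch (the control and binding method are invented names):

protected void Page_Load(object sender, EventArgs e)
{
    GridView1.EnableViewState = false; // the grid contributes nothing to __VIEWSTATE
    BindGrid();                        // so rebind from the data source on every request
}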
I might have a simple solution for you in another post. It's a simple class to include in your app and a few lines of code in the ASP.NET page itself. If you combine it with a distributed caching system, you could save a lot of dough, as viewstate is large and costly. Microsoft's Velocity might be a good product to attach this method to. If you do use it and save a ton of money, though, I'd love a little mention for it. Also, if you are unsure of anything, let me know and I can talk with you in person.
Here is the link to my code. link text
If you are concerned with scaling, then using the session token as a unique identifier or storing the state in session is more or less guaranteed to work in a web farm scenario.
Store the viewstate in a session object and use a distributed cache or state service to keep the session separate from the web servers, such as Microsoft's Velocity.
I know this is a little stale, but I've been working for a couple of days on an open-source "virtual appliance" using squid and ecap to:
1.) gzip
2.) handle SSL
3.) replace viewstate with a token on request/response
4.) memcache for object caching
Anyway, it looks pretty promising. Basically it would sit in front of the load balancers and should really help client performance. It doesn't seem to be very hard to set up either.
I blogged about this a while ago - the solution is at http://www.adverseconditionals.com/2008/06/storing-viewstate-in-memcached-ultimate.html
This lets you change the ViewState provider to one of your choice without having to change each of your Page classes, by using a custom PageAdapter. I stored the ViewState in memcached. In retrospect I think storing it in a database or on disk is better - we filled memcached up very quickly. It's a very low-friction solution.
No need to buy or sell anything to eliminate viewstate bloat. You just need to extend the HiddenFieldPageStatePersister. The 100-200 KB of ViewState will stay on the server, and only a 62-byte token will be sent in the page instead.
Here is a detailed article on how this can be done:
http://ashishnangla.com/2011/07/21/reducing-size-of-viewstate-in-asp-net-webforms-by-writing-a-custom-viewstate-provider-pagestatepersister-part-12/
