Does System.Web.Caching utilize an LRU algorithm? - asp.net

I was just working on the documentation for an open source project I created a while back called WebCacheHelper. It's an abstraction on top of the existing Cache functionality in System.Web.Caching.
I'm having trouble finding the details of the algorithm used to purge the cache when the server runs low on memory.
I found this text on MSDN:
When the Web server hosting an ASP.NET application runs low on memory, the Cache object selectively purges items to free system memory. When an item is added to the cache, you can assign it a relative priority compared to the other items stored in the cache. Items to which you assign higher priority values are less likely to be deleted from the cache when the server is processing a large number of requests, whereas items to which you assign lower priority values are more likely to be deleted.
This is still a little vague for my taste. I want to know what other factors are used to determine when to purge a cached object. Is it a combination of the last accessed time and the priority?

Let's take a look at the source code. Purging starts from the TrimIfNecessary() method in the CacheSingle class. First it tries to remove all expired items via the FlushExpiredItems() method of the CacheExpires class. If that's not enough, it starts iterating through "buckets" in CacheUsage.FlushUnderUsedItems(). Cache usage data/statistics are divided into buckets according to CacheItemPriority, and the statistics/LRU ordering are tracked separately in each bucket. There are two iterations through the buckets: the first removes only newly added items (added during the last 10 seconds), the second removes the rest. It starts with the CacheItemPriority.Low bucket and its LRU items, stops once it has removed enough, and otherwise continues to the next LRU items and higher-priority buckets. It never touches CacheItemPriority.NotRemovable items, because they are never added to the usage buckets in the first place.
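To see that behaviour from the application side, you can assign priorities and attach a removal callback when inserting items; an eviction caused by memory pressure is reported as CacheItemRemovedReason.Underused. A small illustration (the keys and payloads are placeholders, not WebCacheHelper code):

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class CachePriorityDemo
    {
        public static void AddItems()
        {
            Cache cache = HttpRuntime.Cache;

            // Logs why an item left the cache; memory-pressure trims report Underused.
            CacheItemRemovedCallback log = (key, value, reason) =>
                System.Diagnostics.Trace.WriteLine(key + " removed: " + reason);

            // Low priority: sits in the first usage bucket the trim pass visits.
            cache.Insert("report:low", new object(), null,
                Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
                CacheItemPriority.Low, log);

            // High priority: only trimmed if emptying lower-priority buckets was not enough.
            cache.Insert("lookup:high", new object(), null,
                Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
                CacheItemPriority.High, log);

            // NotRemovable: never added to the usage buckets, so memory pressure won't evict it.
            cache.Insert("config:pinned", new object(), null,
                Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
                CacheItemPriority.NotRemovable, log);
        }
    }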

Related

In SCORM 2004 (4th ed.) when are Available Children meant to be selected and randomized?

The pseudocode for the Select Children Process [SR.1] and the Randomize Children Process [SR.2] heavily suggests these processes are meant to be run multiple times, although for SR.1 no behavior is defined for when selection is meant to occur onEachNewAttempt.
Since both the Sequencing Request Process [SB.2.12] and the Navigation Request Process [NB.2.1] expect the Available Children to be selected/randomized, and the Content Delivery Environment Process [DB.2] only initializes the new attempt after a traversal over the various Available Children has already happened, it seems like the LMS is meant to run both of these processes during initialization of the activity tree itself, before attempting to deliver the first activity or handle any requests.
However, this doesn't explain when SR.2 is meant to be re-run. Since DB.2 creates the new attempt progress information by iterating over the activity path from the root to the specified activity, randomizing each activity's Available Children along the way would result in the position of the specified activity within the activity tree changing after selection, which seems unintuitive. Furthermore, if one were to attempt to implement onEachNewAttempt for SR.1, this could also cause the selected activity to vanish from the available activities (though this would explain why its behavior is undefined in SCORM).
My understanding would be that the Available Children are meant to be initialized to the list of all children, followed by SR.1 and SR.2 being applied to all activities starting from the root, and that SR.2 is then re-applied in DB.2 for every activity in the path, despite this changing the order of activities. Is this correct, or am I missing something?
Upon re-reading section 4.7 in SN-4-48 it seems that the answer is that the selection and randomization should indeed happen once at the start of the sequencing session (i.e. on initialization) and then again in the End Attempt Process [UP.4] (although for onEachNewAttempt it actually states "prior to the first attempt", which could also be read as referring to the delivery process, DB.2).
What makes this a bit awkward is that UP.4 is applied in many places including immediately prior to delivery (in DB.2), which still means randomization could occur after an activity has already been selected and that randomization could happen multiple times in between a sequencing request and delivery.
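To pin down that reading, here is a rough sketch of where SR.1/SR.2 would run under the interpretation above. This is my own simplified model, not normative SCORM pseudocode; the Activity type and the SelectChildren/RandomizeChildren helpers are stand-ins:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Simplified stand-in for a sequencing activity; not the SCORM data model.
    public class Activity
    {
        public List<Activity> Children { get; } = new List<Activity>();
        public List<Activity> AvailableChildren { get; set; } = new List<Activity>();
        public bool RandomizeOnEachNewAttempt { get; set; }
    }

    public static class SequencingSketch
    {
        private static readonly Random Rng = new Random();

        // Run once when the sequencing session starts (i.e. on activity-tree initialization).
        public static void BeginSequencingSession(Activity root)
        {
            foreach (var activity in DepthFirst(root))
            {
                // Available Children start out as the full child list...
                activity.AvailableChildren = new List<Activity>(activity.Children);
                // ...then SR.1 and SR.2 are applied once, from the root down.
                SelectChildren(activity);     // SR.1
                RandomizeChildren(activity);  // SR.2
            }
        }

        // End Attempt Process [UP.4]: per the reading of SN-4-48 above, randomization may be
        // re-applied here, which is why it can still happen between a request and delivery.
        public static void EndAttempt(Activity activity)
        {
            if (activity.RandomizeOnEachNewAttempt)
                RandomizeChildren(activity);  // SR.2 again
        }

        private static void SelectChildren(Activity a) { /* SR.1: keep a configured subset */ }

        private static void RandomizeChildren(Activity a) =>
            a.AvailableChildren = a.AvailableChildren.OrderBy(_ => Rng.Next()).ToList();

        private static IEnumerable<Activity> DepthFirst(Activity root)
        {
            yield return root;
            foreach (var child in root.Children)
                foreach (var a in DepthFirst(child))
                    yield return a;
        }
    }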

Do consistent-read scans in DynamoDB hide changes made after the scan starts?

Given a Scan operation with ConsistentRead=true and a large resultset that spans many pages (such that follow-up requests are required to fetch subsequent pages): what happens if an item is updated after the scan starts but before the relevant page for that item is returned to us (when paginating through the scan results)?
When we eventually reach that page, will we see the updated item, or will we see the version of the item from when the scan started? Or is the behaviour unpredictable?
And: same question again, but for deletes?
Consistent reads refer to consistency with respect to an individual item, not the whole table.
When you initiate a scan with consistent reads, every item read will reflect its up-to-date state at the time of the request, but that doesn't mean the collection of items you get as a result of the scan represents a point-in-time snapshot from when you initiated the scan. New items could be added while the scan is in progress, and depending on where their key falls in the key space they may or may not be included.
The same goes for deleted items.
You can also set up a simple test for this yourself by deliberately slowing down your scan and performing updates while you scan.
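For example, a deliberately slowed-down, strongly consistent scan using the AWS SDK for .NET looks roughly like this (the table name, page size, and delay are placeholders); run updates and deletes from another process while it is paging:

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Amazon.DynamoDBv2;
    using Amazon.DynamoDBv2.Model;

    public static class SlowScanTest
    {
        public static async Task RunAsync()
        {
            var client = new AmazonDynamoDBClient();
            Dictionary<string, AttributeValue> startKey = null;
            var page = 0;

            do
            {
                var response = await client.ScanAsync(new ScanRequest
                {
                    TableName = "MyTable",          // placeholder table name
                    ConsistentRead = true,          // consistent per item, not a table snapshot
                    Limit = 25,                     // small pages so the scan takes a while
                    ExclusiveStartKey = startKey
                });

                Console.WriteLine("page " + (++page) + ": " + response.Items.Count + " items");
                startKey = response.LastEvaluatedKey;

                // While this delay runs, update/delete items from another process:
                // changes in key ranges you have not reached yet will show up,
                // changes in ranges you already paged through will not.
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
            while (startKey != null && startKey.Count > 0);
        }
    }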

Index overhead during updates when the property value is unchanged

If the majority of the indexed properties are not changed during an update to an entity, will there be any difference in performance compared to when the indexed properties have changed? I am trying to understand what kind of hotspotting can happen in an app that has relatively few inserts but a lot of updates, where the updates don't change the majority of the built-in indexed properties.
You shouldn't have any performance issues from updates that don't affect the indexes.
Hotspotting may happen if you have high read/write rates to a narrow key range.
In an update-intensive application you have to be careful not to update a single entity more than about once per second, because that introduces higher latency.
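As a rough illustration of the once-per-second guidance, you can coalesce writes per entity key on the application side. This is a minimal sketch under that assumption; persistAsync is a hypothetical placeholder for whatever datastore call your application actually makes:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Buffers updates per entity key and writes each key at most roughly once per interval.
    public class CoalescingWriter<T>
    {
        private readonly ConcurrentDictionary<string, T> _pending = new ConcurrentDictionary<string, T>();
        private readonly Func<string, T, Task> _persistAsync;

        public CoalescingWriter(Func<string, T, Task> persistAsync)
        {
            _persistAsync = persistAsync;
        }

        // Callers can overwrite the pending value as often as they like; only the latest wins.
        public void Queue(string key, T latest) => _pending[key] = latest;

        // Run this loop in the background: one flush pass per interval.
        public async Task FlushLoopAsync(TimeSpan interval)
        {
            while (true)
            {
                foreach (var key in _pending.Keys)
                {
                    if (_pending.TryRemove(key, out var value))
                    {
                        await _persistAsync(key, value);
                    }
                }
                await Task.Delay(interval);
            }
        }
    }

Only the latest queued value per key is written, which matches the last-write-wins nature of overwriting an entity.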

Oracle sequence cache aging too often

My ASP.NET application uses sequences to generate the primary keys of its tables. The DB administrators have set the cache size to 20. The application is now under test, and a few records are added daily (say 4 for each user test session).
I've found that each new test session's records always use a fresh portion of the cache, as if the previous day's cached numbers had expired, losing tens of keys every day. I'd like to understand whether this is due to some mistake I might have made in my application (disposing of TableAdapters or whatever) or whether it's the usual behaviour. Are there programming best practices to take into account when handling Oracle sequences?
Since the application will not have to bear a heavy load (say 20-40 new records a day), I was wondering whether I should set a smaller cache size, or none at all.
Does resizing the sequence cache imply resetting the current value?
Thank you in advance for any hints.
The answer from Justin Cave in this thread might be interesting for you:
http://forums.oracle.com/forums/thread.jspa?threadID=640623
In a nutshell: if the sequence is not accessed frequently enough, but you have a lot of "traffic" in the library cache, then the sequence might be aged out and removed from the cache. In that case the pre-allocated values are lost.
If that happens to you very frequently, it suggests that your sequence is not used very often.
I would guess that reducing the cache size (or completely disabling it) will not have a noticeable impact on performance in your case (especially when taking your statement of 20-40 new records a day into account).
Oracle Sequences are not gap-free. Reducing the Cache size will reduce the gaps... but you will still have gaps.
The sequence is not associated with the table by the database, but by your code (via the NEXTVAL on the insert, whether through a trigger, plain SQL, or a package API) -- on that note, you may use the same sequence across multiple tables (it is not like SQL Server's IDENTITY, which is tied to a column/table).
Thus changing the sequence will have no impact on the indexes.
You would just need to make sure that if you drop the sequence and recreate it, you 'reseed' it to one past the current value (e.g. create sequence seqer start with 125 nocache;).
However:
"If your application requires a gap-free set of numbers, then you cannot use Oracle sequences. You must serialize activities in the database using your own developed code."
But be forewarned: you may increase disk I/O and possibly introduce transaction locking if you choose not to use sequences.
"The sequence generator is useful in multiuser environments for generating unique numbers without the overhead of disk I/O or transaction locking."
To reiterate a_horse_with_no_name's comments: what is the issue with gaps in the ID?
Edit
Also have a look at the caching logic you should use, described here:
http://download.oracle.com/docs/cd/E11882_01/server.112/e17120/views002.htm#i1007824
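For completeness, the "serialize it yourself" approach from the quoted passage above usually looks something like the sketch below: a counter table whose row is locked with SELECT ... FOR UPDATE. The ID_COUNTERS table, its column names, and the connection handling are assumptions, and this is exactly the disk I/O and locking overhead the warning refers to:

    using System;
    using Oracle.ManagedDataAccess.Client;

    public static class GapFreeIds
    {
        // Returns the next gap-free number for a named counter stored in a table.
        // SELECT ... FOR UPDATE serializes concurrent callers on the counter row.
        public static long Next(string counterName, string connectionString)
        {
            using (var conn = new OracleConnection(connectionString))
            {
                conn.Open();
                // Commands on this connection take part in the local transaction.
                using (var tx = conn.BeginTransaction())
                {
                    long next;
                    using (var select = conn.CreateCommand())
                    {
                        select.CommandText =
                            "SELECT current_value + 1 FROM id_counters WHERE name = :name FOR UPDATE";
                        select.Parameters.Add(new OracleParameter("name", counterName));
                        next = Convert.ToInt64(select.ExecuteScalar());
                    }
                    using (var update = conn.CreateCommand())
                    {
                        update.BindByName = true;
                        update.CommandText =
                            "UPDATE id_counters SET current_value = :next WHERE name = :name";
                        update.Parameters.Add(new OracleParameter("next", next));
                        update.Parameters.Add(new OracleParameter("name", counterName));
                        update.ExecuteNonQuery();
                    }
                    tx.Commit();
                    return next;
                }
            }
        }
    }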
If you are using the sequence for PKs and not to enforce some application logic, then you shouldn't worry about gaps. However, if there is some application logic tied to sequential sequence values, you will have holes if you use sequence caching and do not have a busy system. Sequence cache values can be aged out of the library cache.
You say that your system is not very busy; in that case, alter your sequence to NOCACHE. You are in a position of taking a negligible performance hit to fix a logic issue, so you might as well.
As people mentioned: gaps shouldn't be a problem, so if you require no gaps you are doing something wrong (but I don't think this is what you want).
Reducing the cache should reduce the number of gaps, but it will also decrease the performance of the sequence, especially with concurrent access to it (which shouldn't be a problem in your use case).
Changing the sequence using the ALTER SEQUENCE statement (http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_2011.htm) should not reset the current/next value of the sequence.
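If you do decide to drop the cache, it is easy to check that behaviour on a test schema; the sequence name below is hypothetical:

    using System;
    using Oracle.ManagedDataAccess.Client;

    public static class SequenceCacheCheck
    {
        public static void DisableCacheAndPeek(string connectionString)
        {
            using (var conn = new OracleConnection(connectionString))
            {
                conn.Open();
                using (var alter = conn.CreateCommand())
                {
                    // Drops the cache; any unused cached numbers are typically lost (another gap).
                    alter.CommandText = "ALTER SEQUENCE my_app_seq NOCACHE";
                    alter.ExecuteNonQuery();
                }
                using (var next = conn.CreateCommand())
                {
                    // NEXTVAL keeps counting upward from where the sequence left off, not from 1.
                    next.CommandText = "SELECT my_app_seq.NEXTVAL FROM dual";
                    Console.WriteLine("next value: " + next.ExecuteScalar());
                }
            }
        }
    }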

Is there any way to monitor one single (class) of object in terms of cache?

I am trying to determine which implementation of the data structure would be best for the web application. Basically, I will maintain one "State" object for each unique user; the State will be cached for some time when the user logs in, and after the (non-sliding) expiration period the state is saved to the DB. So in order to balance the DB load and the IIS memory, I have to determine the best (expected) timeout for the cache.
My question is: how do I monitor the cache activity for one particular set of objects? I tried perfmon, and it gives roughly the % of the total memory limit, but no idea of the size (or, even better, a list of all cached objects along with their sizes and other performance data).
One last thing: I expect the program to handle 100,000+ cached users, and each of them may make a request roughly every 10-60 seconds. So performance does matter to me.
What exactly are you trying to measure here? If you just want to get the size of your in-memory State instances at any given time, you can use an application-level counter and add/subtract every time you create/remove an instance of State. That way you know your State size and how many State instances you have. But if you already count on getting 100,000+ users, each requesting at least once a minute, you can actually do the math up front.
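A minimal sketch of such a counter, assuming the State instances go through System.Web.Caching (the key format and the use of object for State are placeholders): the removal callback decrements the count no matter why the item left the cache.

    using System;
    using System.Threading;
    using System.Web;
    using System.Web.Caching;

    public static class StateCacheMonitor
    {
        private static long _liveStates;

        // Current number of State instances held in the ASP.NET cache.
        public static long LiveStates
        {
            get { return Interlocked.Read(ref _liveStates); }
        }

        public static void CacheState(string userId, object state, TimeSpan slidingWindow)
        {
            HttpRuntime.Cache.Insert(
                "state:" + userId, state, null,
                Cache.NoAbsoluteExpiration, slidingWindow,
                CacheItemPriority.Default, OnRemoved);
            Interlocked.Increment(ref _liveStates);
        }

        private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
        {
            // Fires for expiry, explicit removal, and memory-pressure eviction (Underused),
            // so the counter tracks live instances regardless of why they left the cache.
            Interlocked.Decrement(ref _liveStates);
        }
    }

Multiplying LiveStates by the approximate size of one State gives a running memory estimate to weigh against your DB load when tuning the cache timeout.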
