Will my shared variables lose their value? (ASP.NET VB)

I have a class includes.vb that holds some variables (shared with other pages), like:
Public Shared pageid As Integer = 0
I then have a function that does some work with these variables and returns them with values:
Return pageid
When I step through the code, the variables have values (while stepping through the function), but when they are returned to the page, they come back null.
Do they lose their value every time a page is loaded?
Can you suggest an alternative method?
Thanks a lot.

You should probably use Session variables:
Session("PageID") = 0
and access it every time you need it.
It's not a best practice, but if you want the value to live even longer, you can use an application variable per session, so that if the user returns to the website after a day the value still isn't lost (as long as you haven't done an iisreset).
Surviving an iisreset would be even bigger overkill: you could save the value to a file or database and retrieve it every time you want. (Please don't do that!!)
Maybe this can explain further:
http://codeforeternity.com/blogs/technology/archive/2007/12/19/handling-asp-net-session-variables-efficiently.aspx
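As an illustration (the PageState wrapper name is hypothetical, not from the original post), a minimal sketch of keeping the value in session state instead of a Shared field:

Imports System.Web

Public Class PageState
    ' Session state is per-user and survives page loads, unlike a Shared
    ' field, which is per-process and reset on every app-domain recycle.
    Public Shared Property PageId As Integer
        Get
            Dim value = HttpContext.Current.Session("PageID")
            ' Default to 0 when the session value has not been set yet.
            Return If(value Is Nothing, 0, CInt(value))
        End Get
        Set(value As Integer)
            HttpContext.Current.Session("PageID") = value
        End Set
    End Property
End Class

Pages then read and write PageState.PageId instead of the Shared variable, and each visitor gets their own value that survives postbacks.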

It is not a very good idea to use shared variables in web projects: first off, every time you do an "iisreset" or recycle your application pool, these variables are reset. Second, these variables are not per-user but per-process, and (I believe) access to them is not guaranteed to be thread-safe, so one thread may change the value of a variable and another may then reset it to something different.
Judging from the variable name "PageID", I think you are trying to track the last page the user has visited. If this is the case, then session variable scope is a better solution for you. See the tutorial here: http://msdn.microsoft.com/en-us/library/ms178581.aspx


MDriven ECO_ID duplicates

We appear to have a problem with MDriven generating the same ECO_ID for multiple objects. For the most part it seems to happen in conjunction with unexpected process shutdowns and/or server shutdowns, but it does also happen during normal activity.
Our system consists of one ASP.NET application and one WinForms application. The ASP.NET app is set up in IIS to use a single worker process. We have a mixture of WebForms and MVC, including ApiControllers. We're using a rather old version of the ECO packages: 7.0.0.10021. We're on VS 2017, and the target framework is 4.7.1.
We have it configured to use 64-bit integers for object IDs. The database is Firebird. The SQL configuration is set to use ReadCommitted transaction isolation.
As far as I can tell we have configured EcoSpaceStrategyHandler with EcoSpaceStrategyHandler.SessionStateMode.Never, which should mean that EcoSpaces are not reused at all, right? (Why would I even use EcoSpaceStrategyHandler in this case, instead of just creating the EcoSpace normally with the new keyword?)
We have created MasterController : Controller and MasterApiController : ApiController classes that we use for all our controllers. These have an EcoSpace property that simply does this:
if (ecoSpace == null)
{
    if (ecoSpaceStrategyHandler == null)
        ecoSpaceStrategyHandler = new EcoSpaceStrategyHandler(
            EcoSpaceStrategyHandler.SessionStateMode.Never,
            typeof(DiamondsEcoSpace),
            null,
            false
        );
    ecoSpace = (DiamondsEcoSpace)ecoSpaceStrategyHandler.GetEcoSpace();
}
return ecoSpace;
I.e. if no strategy handler has been created, create one, specifying no pooling and no session-state persisting of EcoSpaces. Then, if no EcoSpace has been fetched, fetch one from the strategy handler and return it. Is this an acceptable approach? Why would it be better than simply doing this:
if (ecoSpace == null)
    ecoSpace = new DiamondsEcoSpace();
return ecoSpace;
In aspx we have a master page that has an EcoSpaceManager. It has been configured to use a pool but SessionStateMode is Never. It has EnableViewState set to true. Is this acceptable? Does it mean that EcoSpaces will be pooled but inactivated between round trips?
It is possible that we receive multiple incoming API calls in tight succession, so that one API call hasn't been completed before the next one comes in. I assume that this means that multiple instances of MasterApiController can execute simultaneously but in separate threads. There may of course also be MasterController instances executing MVC requests and also the WinForms app may be running some batch job or other.
But as far as I understand, ID reservation is made at the beginning of any UpdateDatabase call, in this way:
update "ECO_ID" set "BOLD_ID" = "BOLD_ID" + :N;
select "BOLD_ID" from "ECO_ID";
If the returned value is K, this will reserve N new IDs, ranging from K - N to K - 1. Using ReadCommitted transactions everywhere should ensure that the update locks the ID data row, forcing any concurrent save operations to wait; it then fetches the update result without interference from other transactions, then commits. At that point any other pending save operation can proceed with its own ID reservation. I fail to see how this could result in the same ID being used for multiple objects.
I should note that it does sometimes seem to produce duplicate IDs within one single UpdateDatabase call, i.e. when saving a set of new related objects, some of them end up with the same ID. I haven't really confirmed this, though.
Any ideas what might be going on here? What should I look for?
The issue is most likely that you use ReadCommitted isolation.
This allows two systems to simultaneously start a transaction, read the current value, increase the batch, and then save one after the other.
You must use Serializable isolation for key generation, i.e. only read things that are not currently in a write operation.
MDriven uses two settings for isolation level: UpdateIsolationLevel and FetchIsolationLevel.
Set your UpdateIsolationLevel to Serializable.
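To illustrate the fix (a minimal sketch, not MDriven's internal code; it reuses the ECO_ID/BOLD_ID SQL from the question and assumes an open ADO.NET connection to the Firebird database):

Imports System.Data

Public Module IdReservation
    ' Reserves n IDs and returns the first ID of the reserved range.
    Public Function ReserveIds(conn As IDbConnection, n As Integer) As Long
        Using tx = conn.BeginTransaction(IsolationLevel.Serializable)
            Using cmd = conn.CreateCommand()
                cmd.Transaction = tx
                cmd.CommandText = "UPDATE ""ECO_ID"" SET ""BOLD_ID"" = ""BOLD_ID"" + " & n
                cmd.ExecuteNonQuery()
                cmd.CommandText = "SELECT ""BOLD_ID"" FROM ""ECO_ID"""
                Dim k = Convert.ToInt64(cmd.ExecuteScalar())
                tx.Commit()
                ' As in the question: the reserved range is K - N .. K - 1.
                Return k - n
            End Using
        End Using
    End Function
End Module

With Serializable isolation, a second transaction's read of BOLD_ID cannot interleave with the first one's update, so two processes can no longer reserve overlapping ranges.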

Dictionary Behaves Strangely During Databinding

I was trying to do a little data access optimization, and I ran into a situation where a dictionary appeared to get out of sync in a way that should be impossible, unless I'm somehow getting into a multithreaded situation without knowing it.
One column of GridLabels binds to a property that does data access, which is a tad expensive. However, multiple rows end up making the same call, so I should be able to head any problems off at the pass by doing a little caching.
However, elsewhere in the app this same code is called in ways where caching would not be appropriate, so I needed a way to enable caching on demand. So my databinding code looks like this:
OrderLabelAPI.MultiSyringeCacheEnabled = True
Me.GridLabels.DataBind()
OrderLabelAPI.MultiSyringeCacheEnabled = False
And the expensive call where the caching happens looks like this:
Private Shared MultiSyringeCache As New Dictionary(Of Integer, Boolean)
Private Shared m_MultiSyringeCacheEnabled As Boolean = False

Public Shared Function IsMultiSyringe(orderLabelID As Integer) As Boolean
    If m_MultiSyringeCacheEnabled Then
        'Since this can get hit a lot, we cache the values into a dictionary. Obviously,
        'it goes away after each request. And the cache is disabled by default.
        If Not MultiSyringeCache.ContainsKey(orderLabelID) Then
            MultiSyringeCache.Add(orderLabelID, DoIsMultiSyringe(orderLabelID))
        End If
        Return MultiSyringeCache(orderLabelID)
    Else
        Return DoIsMultiSyringe(orderLabelID)
    End If
End Function
And here is the MultiSyringeCacheEnabled property:
Public Shared Property MultiSyringeCacheEnabled As Boolean
    Get
        Return m_MultiSyringeCacheEnabled
    End Get
    Set(value As Boolean)
        ClearMultiSyringeCache()
        m_MultiSyringeCacheEnabled = value
    End Set
End Property
Very, very rarely (unreproducibly rarely...) I will get the following exception: The given key was not present in the dictionary.
If you look closely at the caching code, that's impossible, since the first thing it does is ensure that the key exists. If DoIsMultiSyringe tampered with the dictionary (either explicitly or by setting MultiSyringeCacheEnabled), that could also cause problems, and for a while I assumed this had to be the culprit. But it isn't. I've been over the code very carefully several times. I would post it here, but it gets into a deeper object graph than would be appropriate.
So. My question is, does datagridview databinding actually get into some kind of zany multithreaded situation that is causing the dictionary to seize? Am I missing some aspect of shared members?
I've actually gone ahead and yanked this code from the project, but I want to understand what I'm missing. Thanks!
Since this is ASP.NET, you have an implicit multithreaded scenario. You are using a shared variable (see What is the use of a shared variable in VB.NET?), which is (as the keyword implies) "shared" across multiple threads (from different people visiting the site).
You can very easily have a scenario where one visitor's thread gets to here:
'Since this can get hit a lot, we cache the values into a dictionary. Obviously,
'it goes away after each request. And the cache is disabled by default.
If Not MultiSyringeCache.ContainsKey(orderLabelID) Then
    MultiSyringeCache.Add(orderLabelID, DoIsMultiSyringe(orderLabelID))
End If
' My thread is right here, when you visit the site
Return MultiSyringeCache(orderLabelID)
and then your thread comes in here and supersedes my thread:
Set(value As Boolean)
    ClearMultiSyringeCache()
    m_MultiSyringeCacheEnabled = value
End Set
Then my thread is going to try to read a value from the dictionary after you've cleared it.
That said, I am not sure what performance benefit you expect from a "cache" that you clear with every request. It looks like you should simply not make this variable Shared: make it an instance variable, and each user request accessing it will have its own copy.
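If a shared cache is genuinely wanted, here is a thread-safe sketch (assuming .NET 4+; it reuses m_MultiSyringeCacheEnabled and DoIsMultiSyringe from the question):

Imports System.Collections.Concurrent

Private Shared ReadOnly MultiSyringeCache As New ConcurrentDictionary(Of Integer, Boolean)()

Public Shared Function IsMultiSyringe(orderLabelID As Integer) As Boolean
    If m_MultiSyringeCacheEnabled Then
        ' GetOrAdd checks and inserts atomically; there is no separate
        ' ContainsKey/Add window for another thread's Clear() to slip into.
        Return MultiSyringeCache.GetOrAdd(orderLabelID,
                                          Function(id) DoIsMultiSyringe(id))
    End If
    Return DoIsMultiSyringe(orderLabelID)
End Function

GetOrAdd folds the contains-check and the insert into one atomic call, so a concurrent clear can no longer strand the read between them.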

Make a final call to the Database when user leaves website (ASPX)?

I have a system set up to lock certain content in a database table so only one user can edit that content at a time. Easy enough and that part is working fine. But now I'm at a road block of how to send a request to "unlock" the content. I have the stored procedure to unlock the content, but how/where would I call it when the user just closes their browser?
You can't know when the user closes their browser, just as you can't know when they turn off their computer. You have to do it the other way around.
Require that the lock be renewed periodically. Only the web site would do the periodic renewal. If the user stops using the web site, then the lock expires.
Otherwise, require the user to explicitly unlock the content. Other users who want to edit the content can then go yell at the first user when they can't do their jobs. Not a technological solution, but still a good one. Shame works.
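To make the renewal idea concrete, here is a minimal sketch (the ContentLocks table and its LockedBy/LockExpiresAt columns are hypothetical, not from the question):

Imports System.Data.SqlClient

Public Module LockRenewal
    ' Called by the page on every postback (or from a periodic Ajax ping).
    ' Pushes the expiry forward; a lock nobody renews simply times out.
    Public Sub RenewLock(connStr As String, contentId As Integer, userId As Integer)
        Using conn As New SqlConnection(connStr)
            conn.Open()
            Using cmd As New SqlCommand(
                    "UPDATE ContentLocks " &
                    "SET LockExpiresAt = DATEADD(minute, 2, GETUTCDATE()) " &
                    "WHERE ContentID = @id AND LockedBy = @user", conn)
                cmd.Parameters.AddWithValue("@id", contentId)
                cmd.Parameters.AddWithValue("@user", userId)
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module

Readers then treat any lock whose LockExpiresAt is in the past as free, so an explicit unlock call is never needed.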
The best thing you can really do is add something to Session_End in your global.asax. Unfortunately, this won't fire until the session times out.
When the user clicks the "X" in their browser, there isn't any way to guarantee the browser will send you anything back.
A quick note on the Session_End approaches: if you use this method, you have to ensure that session state is InProc, e.g. add something like this to your Web.config:
<sessionState mode="InProc" timeout="timeout_in_minutes"/>
Also make sure that you've set up IIS not to recycle worker processes during normal operation (see for instance this blog post).
Edit:
Not directly answering the question, but another approach would be to use Optimistic concurrency control on the data in question.
There is no such event as "user closes browser".
Nevertheless, I can think of two workarounds:
Use JavaScript/Ajax to call a method in your page periodically (let's say every 10 seconds). The DateTime of the last call needs to be stored somewhere. Then write a Windows service that checks every second which sessions have timed out, and perform your custom action there.
Use the global.asax Session_End() event. (It cannot be used with every SessionState mode; look up which ones support it.)
Trying to leave a Stack Overflow answer page pops up an "are you sure" dialog. Perhaps during the on-page-leave event that SO uses (or however SO does this), you can send a final request with an XmlHttpRequest object. This won't cover the case where the browser process closes unexpectedly (use Session_End for that), but it will at least send the "I'm closed" event earlier.
I think your one stored procedure can do the locking and unlocking (used with "Select @strNewMax As NewMax")...
Here is an example from a system I have:
Declare @strNewMax Char
Select @strNewMax = 'N'

BEGIN TRANSACTION
/* Lock only the rows for this Item ID, and hold those locks throughout the transaction. */
If @BidAmount > (Select Max(AB_Bid_AMT) From AuctionBid With(updlock, holdlock) Where AB_AI_ID = @AuctionItemId)
Begin
    Insert Into AuctionBid (AB_AI_ID, AB_Bid_AMT, AB_Emp_ID, AB_Entry_DTM)
    Select @AuctionItemId, @BidAmount, @EmployeeId, GetDate()
    Select @strNewMax = 'Y'
End
COMMIT TRANSACTION

Select @strNewMax As NewMax
This will insert a record as the next highest bid, while the updlock and holdlock hints keep the matching AuctionBid rows locked, so no other bids for that item are processed at the same time. It will return either 'Y' or 'N' depending on whether the new bid was accepted.
Maybe you can take this and adjust it to fit your application.

Need suggestion for ASP.Net in-memory queue

I have a requirement to create an HttpHandler that will serve an image file (a simple static file) and also insert a record into a SQL Server table (e.g. http://site/some.img, where some.img is an HttpHandler). I need an in-memory object (like a generic List) that I can add items to on each request (I also have to handle a few hundred or a few thousand requests per second), and I should be able to unload this in-memory object to a SQL table using SqlBulkCopy.
List --> DataTable --> SqlBulkCopy
I thought of using the Cache object: create a generic List, save it in HttpContext.Cache, and insert a new item into it on every request. This will NOT work, as the CacheItemRemovedCallback fires right away when the HttpHandler tries to add a new item, so I can't use the Cache object as an in-memory queue.
Can anybody suggest anything? Would I be able to scale in the future if the load grows?
Why would CacheItemRemovedCallback fire when you ADD something to the queue? That doesn't make sense to me... Even if it does fire, there's no requirement to do anything there. Perhaps I am misunderstanding your requirements?
I have quite successfully used the Cache object in precisely this manner. That is what it's designed for, and it scales pretty well. I stored a Hashtable which was accessed on every app page request and updated/cleared as needed.
Option two: do you really need the queue? SQL Server will scale pretty well too if you just want to write directly into the DB. Use a shared connection object and/or connection pooling.
How about just using a generic List to store the requests and a different thread to do the SqlBulkCopy?
This way storing requests in the list won't block the response for too long, and the background thread can update SQL on its own schedule, say every 5 minutes; see the sketch below.
You can even base the background thread on the Cache mechanism by performing the work in a CacheItemRemovedCallback.
Just insert some object with a removal time of 5 minutes and reinsert it at the end of the processing work.
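A minimal sketch of that list-plus-background-thread idea (assuming .NET 4+ for ConcurrentQueue; the module name, the RequestLog table, and the connection-string handling are hypothetical):

Imports System.Collections.Concurrent
Imports System.Data
Imports System.Data.SqlClient
Imports System.Threading

Public Module RequestLogQueue
    Private ReadOnly Queue As New ConcurrentQueue(Of Integer)()
    Private FlushTimer As Timer

    Public Sub Start(connStr As String)
        ' Flush every 5 minutes on a thread-pool thread.
        FlushTimer = New Timer(Sub(state) Flush(connStr), Nothing,
                               TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(5))
    End Sub

    ' Called from the HttpHandler for each request; very cheap.
    Public Sub Add(imageId As Integer)
        Queue.Enqueue(imageId)
    End Sub

    Private Sub Flush(connStr As String)
        ' Drain the queue into a DataTable matching the destination table.
        Dim table As New DataTable()
        table.Columns.Add("ImageID", GetType(Integer))
        Dim id As Integer
        While Queue.TryDequeue(id)
            table.Rows.Add(id)
        End While
        If table.Rows.Count = 0 Then Return

        Using bulk As New SqlBulkCopy(connStr)
            bulk.DestinationTableName = "RequestLog"
            bulk.WriteToServer(table)
        End Using
    End Sub
End Module

ConcurrentQueue keeps the per-request Enqueue cheap and lock-free, and the timer thread drains whatever has accumulated into one SqlBulkCopy call.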
Thanks Alex & Bryan for your suggestions.
Bryan: When I try to replace the List object in the Cache for the second request (when the count should be 2), the CacheItemRemovedCallback fires, because I'm replacing the current Cache object with a new one. Initially I also thought this was weird behavior, so I'll have to look deeper into it.
Also, for the second suggestion, I will try to insert the record (with the cached SqlConnection object) and see what performance I get in a stress test. I doubt I'll get fantastic numbers, as it's an I/O operation.
Meanwhile I'll keep digging for an optimal solution on my side, with your suggestions in mind.
You can add a condition inside the callback to ensure you are working on a cache entry that was removed due to expiration rather than an explicit remove/replace (in VB, since I had it handy):
Private Shared Sub CacheRemovalCallbackFunction(ByVal cacheKey As String, ByVal cacheObject As Object, ByVal removalReason As Web.Caching.CacheItemRemovedReason)
    Select Case removalReason
        Case Web.Caching.CacheItemRemovedReason.Expired, Web.Caching.CacheItemRemovedReason.DependencyChanged, Web.Caching.CacheItemRemovedReason.Underused
            ' By leaving off Web.Caching.CacheItemRemovedReason.Removed, this will exclude items that are replaced or removed explicitly (Cache.Remove) '
    End Select
End Sub
Edit: Here it is in C# if you need it:
private static void CacheRemovalCallbackFunction(string cacheKey, object cacheObject, System.Web.Caching.CacheItemRemovedReason removalReason)
{
    switch (removalReason)
    {
        case System.Web.Caching.CacheItemRemovedReason.DependencyChanged:
        case System.Web.Caching.CacheItemRemovedReason.Expired:
        case System.Web.Caching.CacheItemRemovedReason.Underused:
            // This excludes the option System.Web.Caching.CacheItemRemovedReason.Removed, which is triggered when you overwrite a cache item or remove it explicitly (e.g., HttpRuntime.Cache.Remove(key))
            break;
    }
}
To expand on my previous comment: I get the impression you are thinking about the Cache incorrectly. If you have an object stored in the Cache, say a Hashtable, any update into that Hashtable will be persisted without you explicitly modifying the contents of the Cache. You only need to add the Hashtable to the Cache once, either at application startup or on the first request.
If you are worried about the bulk copy and page-request updates happening simultaneously, then I suggest you simply have TWO cached lists: one that is updated as page requests come in, and one for the bulk copy operation. When one bulk copy is finished, swap the lists and repeat, as in the sketch below. This is similar to double-buffering video RAM for video games or video apps.
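A minimal sketch of that double-buffer swap (names are illustrative; List isn't thread-safe, so a small lock guards the append and the swap):

Imports System.Collections.Generic

Public Module DoubleBufferedLog
    Private ActiveList As New List(Of Integer)()
    Private ReadOnly SyncRoot As New Object()

    ' Request path: append to the active buffer.
    Public Sub Add(imageId As Integer)
        SyncLock SyncRoot
            ActiveList.Add(imageId)
        End SyncLock
    End Sub

    ' Flush path: swap in an empty buffer and return the filled one,
    ' so the bulk copy never touches the list requests are writing to.
    Public Function SwapBuffers() As List(Of Integer)
        SyncLock SyncRoot
            Dim filled = ActiveList
            ActiveList = New List(Of Integer)()
            Return filled
        End SyncLock
    End Function
End Module

The flush thread calls SwapBuffers, hands the returned list to SqlBulkCopy, and the request path keeps appending to the fresh list in the meantime.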

Page View Counter like on Stack Overflow

What is the best way to implement a page view counter like the one they have here on the site, where each question has a "Views" counter?
Factor in performance and scalability issues.
I've made two observations about the Stack Overflow views counter:
There's a link element in the header that handles triggering the count update. For this question, the markup looks like this:
<link href="/questions/246919/increment-view-count" type="text/css" rel="stylesheet" />
I imagine you could hit that URL to update the view count without ever actually viewing the page, but I haven't tried it.
I had a uservoice ticket where the response from Jeff indicated that views are not incremented from the same IP twice in a row.
The counter I optimized works like this:
UPDATE page_views SET counter = counter + 1 WHERE page_id = x
if (affected_rows == 0) {
    INSERT INTO page_views (page_id, counter) VALUES (x, 1)
}
This way the first view runs two queries; every later view requires only one query.
An efficient way may be to store your counters in the Application object, persisting them to a file or database periodically and on application close.
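A minimal sketch of that Application-object approach (the "ViewCounts" key and module name are illustrative, not from the answer):

Imports System.Collections.Generic
Imports System.Web

Public Module ApplicationViewCounter
    Public Sub IncrementView(app As HttpApplicationState, pageId As Integer)
        ' Application state is shared by every user of the app, so the
        ' increment must be guarded with Lock/UnLock.
        app.Lock()
        Try
            Dim counts = TryCast(app("ViewCounts"), Dictionary(Of Integer, Integer))
            If counts Is Nothing Then
                counts = New Dictionary(Of Integer, Integer)()
                app("ViewCounts") = counts
            End If
            If counts.ContainsKey(pageId) Then
                counts(pageId) += 1
            Else
                counts(pageId) = 1
            End If
        Finally
            app.UnLock()
        End Try
    End Sub
End Module

Application_End (or a periodic timer) would then walk the dictionary and write the totals to the database, per the answer's persistence note.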
Instead of making a database call every time the page is hit, I would increment a counter using a cache object and, depending on how many visits your site gets every day, send the page hits to the database on every 100th hit. This is way faster than updating the database on every single hit; a sketch follows below.
Or another solution is analyzing the IIS log file and updating hits every 30 minutes through a Windows service. This is what I have implemented, and it works wonders.
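A minimal sketch of that batched-counter idea (assuming .NET 4+ for ConcurrentDictionary; the module name is hypothetical, and the page_views table reuses the schema from the earlier answer):

Imports System.Collections.Concurrent
Imports System.Data.SqlClient

Public Module BatchedViewCounter
    ' In-memory hit counts per page, flushed in batches of 100.
    Private ReadOnly HitCounts As New ConcurrentDictionary(Of Integer, Integer)()

    Public Sub CountHit(connStr As String, pageId As Integer)
        ' AddOrUpdate is atomic, so concurrent requests can't lose increments.
        Dim newCount = HitCounts.AddOrUpdate(pageId, 1, Function(id, c) c + 1)
        If newCount Mod 100 = 0 Then
            ' One UPDATE per 100 hits instead of one per hit.
            Using conn As New SqlConnection(connStr)
                conn.Open()
                Using cmd As New SqlCommand(
                        "UPDATE page_views SET counter = counter + 100 WHERE page_id = @id", conn)
                    cmd.Parameters.AddWithValue("@id", pageId)
                    cmd.ExecuteNonQuery()
                End Using
            End Using
        End If
    End Sub
End Module

Hits still accumulating in memory are lost on an app-pool recycle, which is the usual trade-off for this kind of batching.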
You can implement an IHttpHandler to do that.
I'm a fan of @Guillaume's style of implementation. I use a transparent GIF handler and in-memory queues to batch up sets of changes, which are then periodically flushed by a separate thread created in global.asax.
The handler implements IHttpHandler, processes the request parameters (e.g. page id, language, etc.), updates the queue, then Response.Writes the transparent GIF.
By moving persistent changes to a separate thread from the user request, you also deal much better with potential serialization issues from running multiple servers, etc.
Of course, you could just pay someone else to do the work too, e.g. with transparent GIFs.
For me the best way is to have a field in the Questions table and update it when the question is accessed:
UPDATE Questions SET views = views + 1 WHERE QuestionID = x
Application object: IMO not scalable, because it may end up consuming a lot of memory as more questions are accessed.
page_views table: no need for it; you'd have to do a costly join afterwards.
