While debugging a classic ASP application (and learning about classic ASP at the same time) I've encountered the following:
Application("Something") = "some value"
and elsewhere in the code this value gets used thus:
someObj.Property = Session("Something")
How does the Application object relate to Session?
A Session variable is linked to a user. An Application variable is shared between all users.
Application is a handy place to store things you want to persist, but you can't guarantee they'll always be there. So think low-end caching, short-term variable storage, etc.
In this context, with these definitions, they have very little to do with each other except that getting and setting variables is roughly the same for each.
Note: there can be concurrency issues when using Application (because you could easily have more than one user hitting something that reads or writes to it), so I suggest you use Application.Lock before you write and Application.Unlock after you're done (see the sketch after these notes). This only really applies to writing.
Note 2: I'm not sure if it automatically unlocks after the request is done (that would be sensible), but I wouldn't trust it to. Make sure that any part of the application that could conceivably explode isn't inside a lock, otherwise you might end up locking other users out.
Note 3: In that same vein, don't put things that take a long time to process inside a lock; lock only the bit where you write the data. If you do something that takes 10 seconds while holding the lock, you lock everybody else out.
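For illustration, here is the lock-only-around-the-write pattern sketched in ASP.NET/C# syntax (the page and method names are made up); classic ASP follows exactly the same shape with Application.Lock and Application.Unlock around the assignment.

using System.Web.UI;

public partial class SomePage : Page    // hypothetical page; names are illustrative
{
    protected void SaveSharedValue(string value)
    {
        Application.Lock();             // keep the lock only around the write itself
        try
        {
            Application["Something"] = value;
        }
        finally
        {
            Application.UnLock();       // always release, even if the write throws
        }
    }
}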
I was going through Redis RDB persistence and I have some doubts about one of its disadvantages.
My understanding so far:
We should use RDB persistence when we need to save a snapshot of the dataset currently in memory at some regular interval.
I can understand that this way we can lose some data if the server breaks down. But the other disadvantage I can't understand is how fork() can be time consuming when persisting a large dataset using RDB.
Quoting from the documentation:
RDB needs to fork() often in order to persist on disk using a child
process. Fork() can be time consuming if the dataset is big, and may
result in Redis to stop serving clients for some millisecond or even
for one second if the dataset is very big and the CPU performance not
great. AOF also needs to fork() but you can tune how often you want to
rewrite your logs without any trade-off on durability.
As far as I know, when a parent process forks, it creates a new child process; we can have the child execute some code based on its PID, or give it a new executable to run using the exec() system call.
But what I don't understand is why this becomes a heavy task when the dataset is large.
I think I know the answer, but I'm not sure about it.
Quoting from this link: https://www.bottomupcs.com/fork_and_exec.xhtml
When a process calls fork, then:
the operating system will create a new process that is exactly the same as the parent process. This means all the state that was talked about previously is copied, including open files, register state and all memory allocations, which includes the program code.
As per the above statement, the whole dataset of Redis will be copied to the child.
Am I understanding this right?
Even though standard fork() is copy-on-write, the OS must still copy all of the page table entries, which can take time if you have small 4k pages and a huge dataset; this is what makes the actual fork() call slow.
You can also find that a lot of time and memory is required if your dataset is changing a lot in a sparse way, as copy-on-write semantics trigger the actual memory pages to be copied while changes are made to the original. Redis also performs incremental rehashing, maintains key expiry and so on, so a more active instance will typically take longer to save to disk.
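To get a feel for why the fork() time scales with dataset size, here is a rough back-of-the-envelope sketch. The dataset size and the per-entry copy cost are assumed, illustrative numbers, not measurements; the only point is that the number of page table entries to copy grows linearly with the amount of memory mapped.

using System;

class ForkCostEstimate
{
    static void Main()
    {
        // Illustrative figures only: the dataset size is hypothetical and the
        // per-page-table-entry cost is an assumed ballpark, not a measurement.
        long datasetBytes = 16L * 1024 * 1024 * 1024;  // e.g. a 16 GB Redis instance
        long pageSize = 4096;                          // standard small pages
        long pageTableEntries = datasetBytes / pageSize;

        double nanosPerEntry = 100.0;                  // assumed copy cost per entry
        double forkMillis = pageTableEntries * nanosPerEntry / 1000000.0;

        Console.WriteLine($"{pageTableEntries:N0} page table entries to copy");
        Console.WriteLine($"~{forkMillis:N0} ms spent inside fork() just copying them");
        // With 2 MB huge pages the entry count, and hence the pause, shrinks ~500x.
    }
}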
More reading:
Faster forking of large processes on Linux?
http://kirkwylie.blogspot.co.uk/2008/11/linux-fork-performance-redux-large.html
I know the differences between SessionState and ViewState:
SessionState persists for the whole session, whereas ViewState lives only within the same page.
SessionState stays on the server, but ViewState travels between client and server.
Now, taking the above into account, if I have plenty of variables (which means that much more bandwidth) that I need to keep across postbacks, which one should I pick? I'm stuck in the middle because:
I know that I'm going to use those variables on only one page, and ViewState is appropriate for that case.
On the other hand, it seems it's going to take a lot of bandwidth, as there are quite a few variables.
Unless you are speaking of a few thousand variables, there is nothing to worry about.
Most ASP.NET controls already store a lot of their own state in ViewState.
You can easily use a page performance tool to see the increase in your page size after you put the variables in ViewState. In most cases it is not something to worry about.
Variables usually do not take much space, a few KB or even less. Putting data in Session unnecessarily can degrade server performance: as the number of clients grows, the load on the server machine is multiplied. ViewState, on the other hand, takes no space on the server, leaving that memory free for other useful operations.
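As a minimal sketch of the ViewState option (the page and property names here are made up for illustration), you can wrap the value in a property so the rest of the page code doesn't care where it lives; switching the property body to Session["SelectedYear"] would move the cost from page size to server memory.

using System;
using System.Web.UI;

public partial class ReportPage : Page   // hypothetical page; names are illustrative
{
    // A page-scoped value that survives postbacks by riding along in ViewState.
    protected int SelectedYear
    {
        get
        {
            object stored = ViewState["SelectedYear"];
            return stored == null ? DateTime.Now.Year : (int)stored;
        }
        set { ViewState["SelectedYear"] = value; }
    }
}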
For a pseudo-function like:
void transaction(Account from, Account to, double amount) {
    Semaphore lock1, lock2;
    lock1 = getLock(from);
    lock2 = getLock(to);

    wait(lock1);
    wait(lock2);

    withdraw(from, amount);
    deposit(to, amount);

    signal(lock2);
    signal(lock1);
}
Deadlock happens if you run transaction(A, B, 50) and transaction(B, A, 10) concurrently.
How can this be prevented?
Would this work?
A simple deadlock prevention strategy when handling locks is to impose a strict order on the locks in the application and always acquire them in that order. Assuming all accounts have a number, you could change your logic to always grab the lock for the account with the lowest account number first, then grab the lock for the one with the highest number.
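Here is a minimal sketch of that idea in C#, using lock statements in place of the semaphores (the Account fields and ids are assumptions for illustration):

class Account
{
    public int Id;                        // assumed: every account has a unique number
    public double Balance;
    public readonly object Gate = new object();
}

static class Bank
{
    // Always lock the account with the lower id first, so two opposite transfers
    // can never each hold one lock while waiting for the other.
    public static void Transaction(Account from, Account to, double amount)
    {
        Account first  = from.Id < to.Id ? from : to;
        Account second = from.Id < to.Id ? to : from;

        lock (first.Gate)
        {
            lock (second.Gate)
            {
                from.Balance -= amount;   // withdraw
                to.Balance   += amount;   // deposit
            }
        }
    }
}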
Another strategy for preventing deadlocks is to reduce the number of locks. In this case it might be better to have one lock that covers all accounts. That would definitely make the locking structure far simpler. If the application shows performance problems under heavy load, and profiling shows that lock contention is the problem, then it is time to invent a more fine-grained locking strategy.
By making the entire transaction a critical section? That's only one possible solution, at least.
I have a feeling this is homework of some sort, because it's very similar to the dining philosophers problem based on the example code you give. (Multiple solutions to the problem are available at the link provided, just so you know. Check them out if you want a better understanding of the concepts.)
In the Seam Reference Guide, one can find this paragraph:
We can set a sensible default for the concurrent request timeout (in ms) in components.xml:
<core:manager concurrent-request-timeout="500" />
However, we found that 500 ms is not nearly enough time for most of the cases we had to deal with, especially with the severe restriction seam places on conversation access.
In our application we have a combination of page scoped ajax requests (triggered by various user actions), some global scoped polling notification logic (part of the header, so included in every page) and regular links that invoke actions and/or navigate to other pages.
Therefore, we get the dreaded concurrent access to conversation exception way too often, even without any significant load on the site.
After researching the options for quite a bit, we ended up bumping this value to several seconds (we're debating whether to bump it up to 10s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all the ajax requests would still leave us exposed to a user deciding to click a link right when one of our polling calls was in progress). And we'd much rather have the users wait for a second or two instead of getting an error page just because they clicked a link at the wrong moment.
And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations and take care of the needed locking ourselves, for instance :)? How do people solve this problem (ajax requests mixed with user-driven interaction) in Seam? Disabling all the links on the page while ajax requests are in progress (as suggested by one blog post) is really not a viable option.
Any other suggestions?
TIA,
Andrei
We use 60000 or 120000 (1-2 minutes). Concurrent-request-timeout is designed to avoid deadlocks. Historically we have far more problems with timeouts than deadlocks. A better approach is to use a client-side queue (<a4j:ajaxQueue> if using RichFaces) to serialize and remove duplicate requests as much as possible, then set the timeout high enough to avoid any remaining problems.
There are many serious issues resulting from Seam's concurrent request timeouts:
The issue is the last request gets the ConcurrentRequestTimeoutException. If the user double-clicks or reloads the page, only the last request matters -- why should he get an error?
Usually the ConcurrentRequestTimeoutException is suppressed, and only secondary NullPointerExceptions and #In injection failures are shown, making debugging difficult.
Seam 2.2.1 has a severe problem where transactions, ThreadLocals, and locks may leak after a timeout occurs, especially when used with <spring:spring-transaction/>. Look at SeamPhaseListener.afterRestoreView: there's no finally block to clean up after restoreConversation fails!
In my opinion there are many poor aspects to this design, so it's best to use a much higher timeout and try to avoid the issues.
This is what we have and it works fine for us:
<core:manager concurrent-request-timeout="5000"
              conversation-timeout="120000" conversation-id-parameter="cid"
              parent-conversation-id-parameter="pid" />
We also use a much higher value for the concurrent-request-timeout.
At least for duplicate events you can use settings on the a4j components to filter and delay them, namely eventsQueue, requestDelay and ignoreDupResponses="true".
(For the last point, see http://docs.jboss.org/seam/2.0.1.GA/reference/en/html/conversations.html )
Can you analyse which types of request are taking a long time? Is there a particular type for which you could reduce the request time by doing the "work" asynchronously and getting the update back in your poll?
In my opinion, ajax requests should always complete fairly quickly; then you can calculate a maximum concurrent request timeout as (request time * max number of requests likely to be initiated).
When you add an item to the System.Web.Caching.Cache with an absolute expiration date, as in the following example, how does Asp.Net behave? Does it:
Simply mark the item as expired, then execute the CacheItemRemovedCallback on the next access attempt?
Remove the item from the cache and execute the CacheItemRemovedCallback immediately?
HttpRuntime.Cache.Insert(key,
                         new object(),
                         null,
                         DateTime.Now.AddSeconds(seconds),
                         Cache.NoSlidingExpiration,
                         CacheItemPriority.NotRemovable,
                         OnCacheRemove);
MSDN appears to indicate that it happens immediately. For example, the "Expiration" section of the "ASP.NET Caching Overview" says "ASP.NET automatically removes items from the cache when they expire." Similarly, the example from the topic "How to: Notify an Application When an Item Is Removed from the Cache" says "If more than 15 seconds elapses between calls to GetReport [a method in the example], ASP.NET removes the report from the cache."
Still, neither of these is unambiguous. They don't say "the callback is executed immediately" and I could conceive of how their writers might have thought option 1 above counts as 'removing' an item. So I did a quick and dirty test, and lo, it appears to be executing immediately - I get regular sixty-second callbacks even when no one is accessing my site.
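For concreteness, a minimal sketch of such a test (the class name, cache key, log path and 60-second interval are illustrative): insert an item with an absolute expiration and a removal callback that logs the time and re-inserts the item, then leave the site alone and watch whether the log keeps growing.

using System;
using System.IO;
using System.Web;
using System.Web.Caching;

public static class CacheExpirationProbe
{
    public static void Start()
    {
        // Expires in 60 seconds; NotRemovable keeps memory pressure from evicting it early.
        HttpRuntime.Cache.Insert("probe",
                                 new object(),
                                 null,
                                 DateTime.Now.AddSeconds(60),
                                 Cache.NoSlidingExpiration,
                                 CacheItemPriority.NotRemovable,
                                 OnCacheRemove);
    }

    private static void OnCacheRemove(string key, object value, CacheItemRemovedReason reason)
    {
        // Log each callback so we can see whether they fire while nobody touches the cache.
        File.AppendAllText(@"C:\temp\cache-probe.log",
            DateTime.Now.ToString("HH:mm:ss") + " " + key + " removed (" + reason + ")\r\n");
        Start();   // schedule the next expiration
    }
}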
Nonetheless, my test was quick and dirty, and in the comments to my answer to "Is there a way to run a process every day in a .Net web application without writing a windows service or SQL server jobs", someone has suggested that Asp.Net actually defers removal and execution of the callback until something tries to access the cache again.
Can anyone settle this authoritatively or is this just considered an implementation detail?
Hurray for Reflector!
Expired cache items are actually removed (and callbacks called) when either:
1) Something tries to access the cache item.
2) The ExpiresBucket.FlushExpiredItems method runs and gets to the item. This method is hard-coded to execute every 20 seconds (the accepted answer to the StackOverflow question Changing frequency of ASP.NET cache item expiration corroborates my reading of this code via Reflector). However, this needs additional qualification (for which, read on).
Asp.Net maintains one cache for each CPU on the server (I'm not sure whether these represent logical or physical CPUs); each of these maintains a CacheExpires instance with a corresponding Timer that calls its FlushExpiredItems method every twenty seconds.
This method iterates over another collection of 'buckets' of cache expiration data (an array of ExpiresBucket instances) serially, calling each bucket's FlushExpiredItems method in turn.
This method (ExpiresBucket.FlushExpiredItems) first iterates all the cache items in the bucket and if an item is expired, marks it expired. Then (I'm grossly simplifying here) it iterates the items it has marked expired and removes them, executing the CacheItemRemovedCallback (actually, it calls CacheSingle.Remove, which calls CacheInternal.DoRemove, then CacheSingle.UpdateCache, then CacheEntry.Close, which actually calls the callback).
All of that happens serially, so there's a chance something could block the entire process and hold things up (and push the cache item's expiration back from its specified expiration time).
However, at this temporal resolution, with a minimum expiration interval of twenty seconds, the only part of the process that could block for a significant length of time is the execution of the CacheItemRemovedCallbacks. Any one of these could conceivably block a given Timer's FlushExpiredItems thread indefinitely. (Though twenty seconds later, the Timer would spawn another FlushExpiredItems thread.)
To summarize, Asp.Net does not guarantee that it will execute callbacks at the specified time, but it will do so under some conditions. As long as the expiration intervals are more than twenty seconds apart, and as long as the cache doesn't have to execute time-consuming CacheItemRemovedCallbacks (globally - any callbacks could potentially interfere with any others), it can execute expiration callbacks on schedule. That will be good enough for some applications, but fall short for others.
Expired items aren't immediately removed from the cache; they're just marked as expired. You don't get a callback until a cache miss. I ran into this back in the ASP.NET 1.1 days, and it hasn't changed.
There may be cases where expired items are removed immediately - such as if there's low memory and high CPU - but you can't count on it.
I usually use a timer that reloads the cache on a regular basis.
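For what it's worth, a minimal sketch of that timer approach (the names, the one-minute interval and the LoadReport placeholder are illustrative assumptions, not an actual implementation):

using System;
using System.Threading;
using System.Web;
using System.Web.Caching;

public static class ReportCacheRefresher
{
    private static Timer _timer;

    // Refresh the cached value on our own schedule instead of relying on the
    // cache's expiration callbacks firing exactly on time.
    public static void Start()
    {
        _timer = new Timer(_ => Reload(), null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    private static void Reload()
    {
        object report = LoadReport();   // hypothetical: rebuild the expensive data
        HttpRuntime.Cache.Insert("report", report, null,
            Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration);
    }

    private static object LoadReport()
    {
        return new object();            // placeholder for the real work
    }
}

Start() would typically be called once from Application_Start.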