Is it generally fast enough to make simple updates synchronously? For instance, with an ASP.NET web app, if I change a person's name... will I have any issues just updating the index synchronously as part of the "Save" mechanism?
Or is the only safe way to have some other asynchronous process make the index updates?
We do updates both synchronously and asynchronously, depending on the kind of action the user is performing. We implemented the synchronous indexing in a way where we use the asynchronous code and just wait for some time for it to complete. We only wait for 2 seconds, which means that if indexing takes longer the user will not see the update, but normally the user will.
We configured logging so that we would get notified whenever the "synchronous" indexing took longer than we waited, to get an idea of how often that happens. We hardly ever go over the 2-second limit.
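A minimal sketch of that wait-with-timeout pattern, using Task for brevity; the type and method names here are placeholders, not our actual code:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class Indexer
{
    public static void SaveAndIndex(int personId, string newName)
    {
        // ... persist the change to the database here ...

        // Run the normal asynchronous indexing path.
        Task indexing = Task.Run(() => UpdateIndex(personId, newName));

        // Block the Save path for at most 2 seconds. If indexing finishes in
        // time, the user sees their own change immediately; if not, it still
        // completes in the background and we log the overrun.
        if (!indexing.Wait(TimeSpan.FromSeconds(2)))
            Trace.TraceWarning("Index update for person {0} took > 2s", personId);
    }

    private static void UpdateIndex(int personId, string newName)
    {
        // placeholder for the real index write (e.g. a Lucene update)
    }
}
```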
If you are using a full text session, then you don't need to update indexes explicitly. The full text session takes care of indexing updated entities.
Let's say I update multiple items in a loop and then call executeQueryAsync() on the ClientContext class, and this call returns an error (the failed callback is invoked). Can I be sure that not a single one of the items I wanted to update was changed? Or is there a chance that some of them will get updated and some of them will not? In other words, is this operation transactional? Thank you; I cannot find a single post about this.
I am asking about the CSOM model, not server solutions.
SharePoint handles its internal updates in a transactional manner, so updating a document will actually involve multiple calls to the DB; if one call fails, the other changes will be rolled back so nothing is left half updated on a failure.
However, that is not made available to us as external developers. If you create an update that modifies 9 items within your executeQueryAsync call and it fails on #7, the first 6 will not be rolled back. You are going to have to write code to handle the failures, and if rolling back is important, you will have to roll back the changes manually within your own code.
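For illustration, a hedged sketch of what "manually roll back" could look like, shown with the managed CSOM's ExecuteQuery (the same idea applies to the JavaScript executeQueryAsync); the list name, field name, and method name are all placeholders:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.SharePoint.Client;

public static class BatchUpdater
{
    public static void UpdateTitlesWithManualRollback(ClientContext ctx, int[] itemIds, string newTitle)
    {
        List list = ctx.Web.Lists.GetByTitle("Orders");
        var originals = new Dictionary<ListItem, string>();

        foreach (int id in itemIds)
        {
            ListItem item = list.GetItemById(id);
            ctx.Load(item, i => i["Title"]);
            ctx.ExecuteQuery();                   // read the value we may need to restore
            originals[item] = (string)item["Title"];

            item["Title"] = newTitle;
            item.Update();                        // queued, not yet sent
        }

        try
        {
            ctx.ExecuteQuery();                   // sends the whole batch; NOT transactional
        }
        catch (ServerException)
        {
            // Items updated before the failing one are already changed on the
            // server, so put the captured values back by hand.
            foreach (KeyValuePair<ListItem, string> pair in originals)
            {
                pair.Key["Title"] = pair.Value;
                pair.Key.Update();
            }
            ctx.ExecuteQuery();                   // best-effort rollback
            throw;
        }
    }
}
```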
First off, I think I should link to this article, which pretty much accomplishes what I want.
Here's my problem:
I have a user control on my site that needs to cache some data for at least 15 minutes and then pull it again from the DB. The problem is that the pull takes about 7-10 seconds to get the result back from the DB.
My thought is that I can set the cache expiration to something like two hours, then have a property on the cached object that says when it was loaded (let's call this the LoadDate property). I would then have the code pull the cached object.
If it's null, I have no choice but to pull the data synchronously and then load my user control
If it's not null, I want to go ahead and load the data onto my user control from the cached object. I would then check the LoadDate property. If it's been 15 minutes or more, then set up an asynchronous process to reload the cache.
There needs to be a process to lock the cache object while it's updating.
I need an if statement that says if the object is locked, then just forget about updating it. This would be for subsequent page loads by other users, as the first user would already be updating the cache and I don't want to update the cache over and over again; it should just be updated by the first call. Remember, I'm already loading my user control before I even do the cache check.
In the article I linked to before, the answer sets up the cache updating perfectly, but I don't believe it is asynchronous. The question started with doing it asynchronously using Page.RegisterAsyncTask. [Question 1] I can't seem to find any information on whether this would allow an asynchronous process to continue even if the user left the page.
[Question 2] Does anybody have a good idea of how to do this? I have some code, but it has grown extremely long and still doesn't seem to be working correctly.
Question 1 (RegisterAsyncTask)
Very important thing to remember: from the client/user/browser perspective, this does NOT make the request asynchronous. If the Task you're registering takes 30 seconds to complete, the browser will still be waiting for 30+ seconds. The only thing RegisterAsyncTask does is free up your worker thread back to IIS for the duration of the asynchronous call. Don't get me wrong - this is still a valuable and important technique. However, for the user/browser making that particular request, it does not have a noticeable impact on response time.
Question 2
This may not be the best solution to your problem, but it's something I've used in the past that might help: when you add your item to the cache, specify an absoluteExpiration and an onRemoveCallback. Create a function which gets fresh data and puts it into the cache: this is the function you should pass as the onRemoveCallback. That way, every time your cached data expires, a callback occurs to put fresh data back into the cache; because the callback occurs as a result of a cache expiration event, there isn't a user request waiting for the 7-10 seconds it takes to cache fresh data.
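A minimal sketch of that setup against HttpRuntime.Cache; LoadDataFromDb() stands in for your 7-10 second query:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class CacheLoader
{
    public static void LoadCache()
    {
        object data = LoadDataFromDb();                 // the slow query (placeholder)
        HttpRuntime.Cache.Insert(
            "MyData",
            data,
            null,                                       // no dependencies
            DateTime.UtcNow.AddMinutes(15),             // absoluteExpiration
            Cache.NoSlidingExpiration,
            CacheItemPriority.Default,
            OnCacheRemoved);                            // onRemoveCallback
    }

    private static void OnCacheRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // Fires on a background thread when the item expires, so no user
        // request waits on the reload. It also fires on app domain shutdown,
        // which is why we check the reason before reloading.
        if (reason == CacheItemRemovedReason.Expired)
            LoadCache();
    }

    private static object LoadDataFromDb()
    {
        /* the 7-10 second DB call goes here */
        return new object();
    }
}
```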
This isn't a perfect design. Some caveats:
How do you load the cache initially? The easiest way would be to call your cache-loader function from Application_Start.
Every 15 minutes, there will be a 7-10 second window where the cache is empty. Any requests during that time will need to go get the data themselves. Depending on your system usage patterns, that may be an acceptable small window, and only very few requests will occur during it.
The cache callback is not guaranteed to happen precisely when your cached item expires. If the system is under extremely heavy load, there could be a delay before the callback is triggered and the cache re-loaded. Again, depending on your system's usage, this may be a non-issue, or a significant concern.
Sorry I don't have a bulletproof answer for you (I'm going to keep an eye on this thread - maybe another SO'er does!). But as I said, I've used this approach with some success, and unless your system is extremely high-load, it should help address the question.
Edit: A slight twist on the above approach, based on the OP's comment
You could cache a dummy value, and use it purely to trigger your refreshCachedData function. It's not especially elegant, but I've done it before, as well. :)
To elaborate: keep your actual cached data in the cache with a key of "MyData", no expiration, and no onRemoveCallback. Every time you cache fresh data in "MyData" you also add a dummy value to your cache: "MyDataRebuildTrigger", with a 15-minute expiration, and with an onRemoveCallback that rebuilds your actual cached data. That way, there's no 7-10 second gap when "MyData" is empty.
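Continuing the CacheLoader sketch from above, the trigger variant might look like this (again, names are illustrative):

```csharp
// "MyData" never expires; the dummy "MyDataRebuildTrigger" entry expires
// every 15 minutes and rebuilds the real entry from its removal callback,
// so readers never see an empty cache.
public static void LoadCacheWithTrigger()
{
    HttpRuntime.Cache.Insert(
        "MyData", LoadDataFromDb(), null,
        Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration,
        CacheItemPriority.NotRemovable, null);

    HttpRuntime.Cache.Insert(
        "MyDataRebuildTrigger", new object(), null,
        DateTime.UtcNow.AddMinutes(15), Cache.NoSlidingExpiration,
        CacheItemPriority.Default,
        delegate(string key, object value, CacheItemRemovedReason reason)
        {
            if (reason == CacheItemRemovedReason.Expired)
                LoadCacheWithTrigger();   // refresh "MyData" and re-arm the trigger
        });
}
```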
I have a situation in which I select an account and I want to bring back its details. This is a single UpdatePanel round trip, and it's quite quick.
In addition, I need to bring back some transactional information which is from a much bigger table and takes a couple of seconds for the query to come back.
Ideally, I would like to put this into a second UpdatePanel and update this additional information once it has been received, but only after the first UpdatePanel has updated, i.e. the user sees:
Change account
See account details (almost instant)
See transactional info (2 seconds later)
The only way I can think of doing this is to use JavaScript to cause a SECOND postback, once the account details have been retrieved, to get the transaction information. Is there a better way?
You cannot run two asynchronous postbacks using UpdatePanels at once.
(Otherwise, the ViewState would get messed up)
However, you can make two "raw" AJAX requests (without UpdatePanels) at once, if you're willing to process the results yourself.
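If you go that route in WebForms, one option is ASP.NET page methods. A hedged sketch (AccountRepository and the method names are hypothetical): with a ScriptManager on the page and EnablePageMethods="true", client script can invoke PageMethods.GetAccountDetails(...) and PageMethods.GetTransactions(...) at the same time and fill each region as its response arrives.

```csharp
using System.Web.Services;
using System.Web.UI;

public partial class AccountPage : Page
{
    // Requires <asp:ScriptManager runat="server" EnablePageMethods="true" />
    // in the markup. Page methods must be public static.
    [WebMethod]
    public static string GetAccountDetails(int accountId)
    {
        return AccountRepository.LoadDetailsHtml(accountId);      // fast query
    }

    [WebMethod]
    public static string GetTransactions(int accountId)
    {
        return AccountRepository.LoadTransactionsHtml(accountId); // the ~2 second query
    }
}

// Placeholder data access; stand-in for your real queries.
internal static class AccountRepository
{
    public static string LoadDetailsHtml(int id) { return "..."; }
    public static string LoadTransactionsHtml(int id) { return "..."; }
}
```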
My ASP.NET (3.5) app allows users to run some complex queries that can take up to 4 minutes to return results.
When I'm running a long loop of code, I'll check Response.IsClientConnected occasionally so I can end the page if the user closes their browser or hits the stop button.
But when querying SQL Server, my .NET code is blocked at the call to GetDataReader().
Is there a straightforward way to do GetDataReader() asynchronously so I can wait around for it but still check, say, every 5-10 seconds to see if the user is still connected, and bail on the database query if they've left?
Or is there some other alternative I'm not thinking of?
You can use SqlCommand's BeginExecuteReader to get an asynchronous data reader. Not 100% sure if that's what you need? SqlCommand also has a Cancel method for bailing out of a running query.
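Roughly how that could fit your 5-10 second polling idea (a sketch, not tested against your DAL; on .NET 3.5 the connection string needs "Asynchronous Processing=true" for Begin/EndExecuteReader):

```csharp
using System;
using System.Data.SqlClient;
using System.Web;

public static class AsyncQueryHelper
{
    // Starts the reader asynchronously, then wakes up every 5 seconds to
    // check whether the client is still connected. Returns null if the
    // user went away and the query was cancelled.
    public static SqlDataReader ExecuteReaderWatchingClient(SqlCommand cmd, HttpResponse response)
    {
        IAsyncResult result = cmd.BeginExecuteReader();

        while (!result.AsyncWaitHandle.WaitOne(5000, false))   // 5-second poll
        {
            if (!response.IsClientConnected)
            {
                cmd.Cancel();                                  // abandon the query server-side
                try { cmd.EndExecuteReader(result); }          // End* must still be called
                catch (SqlException) { /* expected after Cancel */ }
                return null;
            }
        }

        return cmd.EndExecuteReader(result);
    }
}
```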
I would suggest creating an object that has Start() and Wait() methods encapsulating your query. Internally use a ManualResetEvent to wait on the query completing. You'll use the Set() function when the query is complete, and the WaitOne() or WaitOne(TimeSpan ts) method to wait on it in your Wait() method.
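A rough shape of that object, with the query passed in as a delegate (names are illustrative):

```csharp
using System;
using System.Threading;

public class AsyncQuery
{
    private readonly ManualResetEvent _done = new ManualResetEvent(false);
    private readonly Action _query;

    public AsyncQuery(Action query) { _query = query; }

    public void Start()
    {
        ThreadPool.QueueUserWorkItem(delegate
        {
            _query();       // e.g. the blocking GetDataReader() call
            _done.Set();    // signal completion
        });
    }

    // Returns true if the query completed within the timeout.
    public bool Wait(TimeSpan ts)
    {
        return _done.WaitOne(ts, false);
    }
}
```

The page would call Start(), then loop on Wait(TimeSpan.FromSeconds(5)), checking Response.IsClientConnected each time the wait times out.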
I would like some advice from anyone experienced with implementing something like "pessimistic locking" in an ASP.NET application. This is the behavior I'm looking for:
User A opens order #313
User B attempts to open order #313 but is told that User A has had the order opened exclusively for X minutes.
Since I haven't implemented this functionality before, I have a few design questions:
What data should I attach to the order record? I'm considering:
LockOwnedBy
LockAcquiredTime
LockRefreshedTime
I would consider a record unlocked if the LockRefreshedTime < (Now - 10 min).
How do I guarantee that locks aren't held for longer than necessary but don't expire unexpectedly either?
I'm pretty comfortable with jQuery so approaches which make use of client script are welcome. This would be an internal web application so I can be rather liberal with my use of bandwidth/cycles. I'm also wondering if "pessimistic locking" is an appropriate term for this concept.
It sounds like you are most of the way there. I don't think you really need LockRefreshedTime, though; it doesn't really add anything. You may just as well use LockAcquiredTime to decide when a lock has become stale.
The other thing you will want to do is make sure you make use of transactions. You need to wrap the checking and setting of the lock within a database transaction, so that you don't end up with two users who think they have a valid lock.
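As a hedged sketch of that check-and-set (assuming the lock columns proposed in the question live on an Orders table; all names are illustrative):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

public static class OrderLocks
{
    // Returns true if this user acquired the lock. The guarded UPDATE only
    // succeeds when the lock is free or stale, and it runs inside a
    // transaction, so two users can't both see the lock as available.
    public static bool TryAcquireLock(SqlConnection conn, int orderId, string userName)
    {
        using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Serializable))
        using (SqlCommand cmd = new SqlCommand(
            @"UPDATE Orders
                 SET LockOwnedBy = @user, LockAcquiredTime = GETUTCDATE()
               WHERE OrderId = @id
                 AND (LockOwnedBy IS NULL
                      OR LockAcquiredTime < DATEADD(minute, -10, GETUTCDATE()))",
            conn, tx))
        {
            cmd.Parameters.AddWithValue("@user", userName);
            cmd.Parameters.AddWithValue("@id", orderId);

            bool acquired = cmd.ExecuteNonQuery() == 1;  // 0 rows => someone else holds it
            tx.Commit();
            return acquired;
        }
    }
}
```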
If you have tasks that require gaining locks on more than one resource (i.e. more than one record of a given type, or more than one type of record), then you need to acquire the locks in the same order wherever you do the locking. Otherwise you can get a deadlock, where one piece of code has record A locked and wants to lock record B, while another piece of code has B locked and is waiting for record A.
As for how you ensure locks aren't released unexpectedly: make sure that any long-running process that could run longer than your lock timeout refreshes its lock during its run.
The term "explicit locking" is also used to describe this type of locking.
I have done this manually.
Store the primary key of the record in a lock table, and mark the record's mode attribute as "edit".
When another user tries to select this record, indicate to that user that the record is read-only.
Have a maximum time set up for locking the records.
Refresh page data for locked records: while one user is allowed to make changes, all other users are only allowed to view.
The lock table should have a design similar to this:
User_ID,                        // who locked the record
Lock_Start_Time,
Locked_Row_ID (Entity_ID),      // primary key of the locked row
Table_Name (Entity_Name)        // table name of the locked row
The remaining logic is something you will have to figure out.
This is just an idea that I implemented 4 years ago at the special request of a client. Since then no client has asked me to do anything similar, so I haven't pursued any other method.