Is there any way to achieve the following functionality?
I have an Ajax-enabled web site (tree view on the left side, and content on the right side). When a user selects a node on the left side, I need to store the last selected node in the database. However, the user can change the node quite often (even 5 or 10 times a minute).
So I would like to add it to the application cache, like:
cache["userName"]=selectedNodeID;
Is there any caching framework that would perform a certain action on a cache item if it hasn't changed for, let's say, 5 minutes? That would allow me to store the last selected node ID 5 minutes after the last change. If the user changes the node before that time passes, I would just update the cache value and reset the timer.
Any hints?
You could use the callback mechanism for when the cache item is removed from the cache. If you set the cache duration to 5 minutes, then you could check in your callback method whether the item has already been saved - if it hasn't then you can push it to the database. In either case, you can put the item straight back in the cache for another 5 minutes.
e.g. you would have something like this as the last parameter of your Cache.Insert method:
new CacheItemRemovedCallback (SaveItemToDatabaseAndReturnItToTheCache)
and then you would implement the callback method accordingly.
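For illustration, here is a minimal sketch of that approach (the CacheSelectedNode method, the key format, and the SaveSelectedNodeToDatabase call are placeholders for your own code, not an existing API):

using System;
using System.Web;
using System.Web.Caching;

// Called every time the user selects a node; re-inserting resets the 5-minute window.
private void CacheSelectedNode(string userName, int selectedNodeId)
{
    HttpRuntime.Cache.Insert(
        "selectedNode:" + userName,
        selectedNodeId,
        null,                               // no cache dependency
        DateTime.UtcNow.AddMinutes(5),      // expire 5 minutes after the last change
        Cache.NoSlidingExpiration,
        CacheItemPriority.Normal,
        new CacheItemRemovedCallback(SaveItemToDatabaseAndReturnItToTheCache));
}

private void SaveItemToDatabaseAndReturnItToTheCache(string key, object value, CacheItemRemovedReason reason)
{
    // Re-inserting with the same key also fires this callback (reason == Removed),
    // so only persist when the item actually sat untouched for the full 5 minutes.
    if (reason == CacheItemRemovedReason.Expired)
    {
        SaveSelectedNodeToDatabase(key, (int)value);   // placeholder for your own persistence code
        // Optionally put the item straight back in the cache here, as described above.
    }
}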
more info here: http://weblogs.asp.net/kwarren/archive/2004/05/20/136129.aspx
I want to make a purchase order manager, where a queue is created from a database and then assembled into an accordion. The user can look at requests and check a request off when it is done; the task then moves to a "completed purchases" list.
I've been using a "notPurchased" datastore with the following server script:
query.filters.purchased._equals = false;
return query.run();
Then, when the "submit" button is pressed, I call datastore.load();. However, this doesn't seem to refresh the purchase queue immediately. I have to completely refresh the page to see the purchase request move to 'completed'. How do I make this change instantaneous?
I figured out a solution that removed the lag. Instead of filtering the database with a query, I bound the 'visibility' property to the proper boolean flag. Now items move instantly!
I have my reducer with a starting state of an empty array:
folderReducer(state:Array<Folder> = [], action: Action)
I'd like to populate the starting state, so when I do
store.subscribe(s => ..)
The first item I get comes from the database. I assume the way of doing this is with ngrx/effects, but I'm not sure how.
Your store always has the initial state that you define in the reducer function. The initial state's main purpose is to ensure that the application can start up without running into any null-pointer exceptions. It also sets up your application to start making the first API calls, etc., so you can think of it as a technical initial state.
If you want to fill your store with API data on startup, you do that the same way you add/modify data during any other action - it's just that the action of "initially loading data" is not triggered by some user interaction but:
either when your root-component loads
or as part of a service in the constructor
In case you want to prevent specific components from showing anything until your API call is done, you would have to adjust the display components to show or hide data based on your state (e.g. by implementing a flag in your state such as initialDataLoaded).
A dynamic initial state is now supported, see: https://github.com/ngrx/platform/blob/master/docs/store/api.md#initial-state-and-ahead-of-time-aot-compilation
Also see: https://github.com/ngrx/platform/issues/51
I would only do this if the database is local, otherwise the request will hold up loading of the application.
There are multiple ways to leverage ASP.Net MVC response/output caching. At the simplest you can cache a simple page that's the same for everyone:
[OutputCache(Duration=24*3600)] // cache 1 day
public ViewResult Index() ...
You can vary by specific params, and you can bust the cache with a custom key. In all of those cases, the declarative OutputCacheAttribute is used to determine whether the page should just be served from cache. If it is served from cache, the Action doesn't fire - CPU time saved.
So, suppose the Action accepts an Id, meaning its contents vary from Id to Id. Suppose you want to bust the cache for specific Ids when their underlying data changes. MSDN says to set VaryByCustom inside the Action instead of declaratively in OutputCacheAttribute:
Response.Cache.SetVaryByCustom
Like:
[OutputCache(Duration=24*3600, VaryByParam="id")]
public async Task<ViewResult> Thing(string id)
{
Response.Cache.SetVaryByCustom("thing-" + id);
// Some big load of work we'd like to avoid when a ton of visitors hit
// the server goes here.
So... in every scenario up to this one, that big load of work in the Action gets skipped if the page is valid in the cache. But it appears here it's not - unless SetVaryByCustom can interrupt the Action? How exactly does this call work?
If it doesn't interrupt the Action, is there some follow-on check I can do to see if the cache picked it up, so I can return early? And what would I return, given it's normally expecting a whole page filled with data?
Based on testing, it works in neither of the ways I proposed.
In my Action with this strategy applied, I:
Fire a log event
Fire SetVaryByCustom(id);
Fire another log event
And here's what I saw:
BrowserA: Visit a given id
Log: Both events fire - before and after
BrowserB: Visit same id
Log: (nothing)
BrowserA: Change id so an Invalidate fires for id
BrowserB: Visit id, sees 200
Log: Both events fire - before and after
BrowserA: Visit id, sees 200
Log: (nothing)
BrowserB: Visit id, sees 304
Log: (nothing)
In other words, the entire Action doesn't fire, just like in the static/declarative approach where it's all done in OutputCacheAttribute. What's pretty strange is that each time the cache invalidates, the key gets an opportunity to change - you could pass a new key to SetVaryByCustom once per invalidation, but not more often.
Unless you explicitly tell ASP.Net not to, the browser is also told to cache these pages, for the length of time remaining in the 24-hr period (via max-age). That means depending on how your visitors arrive, they may not see the page change as you intended. You can prevent this of course with Location=OutputCacheLocation.Server in your OutputCacheAttribute.
In any case, my core objective is in fact met - the server skips the CPU cost of the Action - just a bit more than I anticipated.
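For anyone wondering where the custom string actually ends up: ASP.NET routes it through HttpApplication.GetVaryByCustomString, and the returned value becomes part of the output-cache key. A minimal sketch of that override in Global.asax, assuming you simply want each distinct "thing-{id}" string to map to its own cached copy:

using System.Web;

public class MvcApplication : HttpApplication
{
    // Sketch only: returning the custom string verbatim means every distinct
    // value passed to SetVaryByCustom (e.g. "thing-42") gets its own cache entry.
    public override string GetVaryByCustomString(HttpContext context, string custom)
    {
        if (custom != null && custom.StartsWith("thing-"))
        {
            return custom;
        }
        return base.GetVaryByCustomString(context, custom);
    }
}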
I have 2 "limit" queries on the same path. I first load a "limit(1)", and then later load a "limit(50)".
When I load the second query, the child_added events don't fire in-order. Instead, the last item in the list (the one returned by limit(1)) is fired first, and then all of the other items are fired in-order, like this:
Limit 1:
new Firebase(PATH).limit(1).on('child_added', ...)
Message 5
Limit 2:
new Firebase(PATH).limit(50).on('child_added', ...)
Message 5
Message 1
Message 2
Message 3
Message 4
I'm confused why "Message 5" is being called first in the second limit operation. Why is this happening, and how can I fix it?
I know this may seem strange, but this is actually the intended behavior.
In order to guarantee that local events can fire immediately without communicating with the server first, Firebase makes no guarantees that child_added events will always be called in sort order.
If this is confusing, think about it this way: if you had no internet connection at all, and you set up a limit(50) and then called push() on that reference, you would expect an event to be fired immediately. When you reconnect to the server, though, it may turn out that there were other items on the server before the one you pushed, and their events will then be triggered after the event for the one you added. In your example, the issue has to do with what data has been cached locally rather than something written by the local client, but the same principle applies.
For a more detailed example of why things need to work this way, imagine 2 clients, A and B:
While offline, Client A calls push() and sets some data
While online, Client B adds a child_added listener to read the messages
Client B then calls push(). The message it pushed triggers a child_added event right away locally.
Client A comes back online. Firebase syncs the data, and client B gets a child_added event fired for that data.
Now, note that even though the message Client A added comes first in the list (since it has an earlier timestamp), the event is fired second.
So as you see, you can't always rely on the order of events to reflect the correct sort order of Firebase children.
But don't worry, you can still get the behavior you want! If you want the data to show up in sort order rather than in the order the events arrived on your client, you have a couple of options:
1) (The naive approach) Use a "value" event instead of child_added, and redraw the entire list of items every time it fires using forEach. Value events only ever fire for complete sets of data, so forEach will always enumerate all of the children in order. There are a couple of downsides to this approach though: (1) value events won't fire until the initial state is loaded from the server, so it won't work if the app is started in "offline mode" until a network connection can be established; (2) it inefficiently redraws everything for every change.
2) (The better approach) Use the prevChildName argument in the callback to on(). In addition to the snapshot, the callback for on() is passed the name of the previous child in the query when items are placed in sort order. This allows you to render the children in the correct order, even if the events are fired out of order. See: https://www.firebase.com/docs/javascript/firebase/on.html
Note that prevChildName only gives the previous child in the query, not in the whole Firebase location. So the child at the beginning of the query will have a prevChildName of null, even if there is a child on the server that comes before it.
Our leaderboard example shows one way to manipulate the DOM to ensure things are rendered in the proper order. See:
https://www.firebase.com/tutorial/#example/leaderboard
I have a system set up to lock certain content in a database table so only one user can edit that content at a time. Easy enough, and that part is working fine. But now I'm at a roadblock: how do I send a request to "unlock" the content? I have the stored procedure to unlock the content, but how/where would I call it when the user just closes their browser?
You also can't know when the user turns off his computer. You have to do it the other way around.
Require that the lock be renewed periodically. Only the web site would do the periodic renewal. If the user stops using the web site, then the lock expires.
Otherwise, require the user to explicitly unlock the content. Other users who want to edit the content can then go yell at the first user when they can't do their jobs. Not a technological solution, but still a good one. Shame works.
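A rough sketch of the renewal idea above (the handler, table, and column names here are all made up for illustration): the editing page calls this endpoint every minute or so while the content is open, and any lock whose LastRenewedUtc is older than a few minutes is treated as expired and free to take.

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web;

// RenewLock.ashx - pinged periodically by the page while the content is open for editing.
public class RenewLockHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        int contentId = int.Parse(context.Request["contentId"]);

        using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
        using (var cmd = new SqlCommand(
            "UPDATE ContentLocks SET LastRenewedUtc = GETUTCDATE() WHERE ContentId = @id AND LockedBy = @user", conn))
        {
            cmd.Parameters.AddWithValue("@id", contentId);
            cmd.Parameters.AddWithValue("@user", context.User.Identity.Name);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    public bool IsReusable { get { return true; } }
}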
The best thing you can really do is add something to your Session_End in your global.asax. Unfortunately, this won't fire until the session times out.
When the user clicks the "X" in their browser, there isn't anyway to guarantee the browser will send you anything back.
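If you do go the Session_End route, a minimal sketch of wiring it to your existing unlock procedure (the "LockedContentId" session key, the connection string name, and the procedure name are assumptions) could look like this:

// Global.asax.cs - needs using System.Configuration, System.Data, System.Data.SqlClient.
void Session_End(object sender, EventArgs e)
{
    // Only fires for InProc session state, and only once the session times out.
    object lockedId = Session["LockedContentId"];
    if (lockedId == null) return;

    using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
    using (var cmd = new SqlCommand("dbo.UnlockContent", conn))   // your existing unlock stored procedure
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@ContentId", (int)lockedId);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}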
A quick note on the Session_End approaches. If you use this method, then you have to ensure:
That session state is InProc, e.g. add something like this to your Web.config:
<sessionState mode="InProc" timeout="timeout_in_minutes"/>
Make sure that you've set up IIS so it does not recycle worker processes during normal operation (see for instance this blog post).
Edit:
Not directly answering the question, but another approach would be to use optimistic concurrency control on the data in question.
There is no such event as "user closes browser".
Nevertheless, I can think of two workarounds:
Use JavaScript/Ajax to call a method in your page periodically (let's say every 10 seconds). The DateTime of the last call needs to be stored somewhere. Then write a Windows service that checks every second which sessions have timed out, and perform your custom action there.
Use the global.asax Session_End() event. (It cannot be used with every session-state mode; look up which ones support it.)
Trying to leave a Stack Overflow answer page pops up an "are you sure" dialog. Perhaps during the on-page-leave event that SO uses (or however SO does this), you can send a final request with an XmlHttpRequest object. This won't cover cases where the browser process closes unexpectedly (use Session_End for that), but it will at least send the "I'm closed" event earlier.
I think your one stored procedure can do the locking and unlocking (used with "Select @strNewMax As NewMax")...
Here is an example from a system I have:
Declare @strNewMax Char(1)
Select @strNewMax = 'N'
BEGIN TRANSACTION
/* Lock only the rows for this Item ID, and hold those locks throughout the transaction. */
If @BidAmount > (Select Max(AB_Bid_AMT) From AuctionBid With (UpdLock, HoldLock) Where AB_AI_ID = @AuctionItemId)
Begin
Insert Into AuctionBid (AB_AI_ID, AB_Bid_AMT, AB_Emp_ID, AB_Entry_DTM)
Select @AuctionItemId, @BidAmount, @EmployeeId, GetDate()
Select @strNewMax = 'Y'
End
COMMIT TRANSACTION
Select @strNewMax As NewMax
This will insert a record as the next highest bid while holding locks so that no other bids for that item are processed at the same time. It will return either 'Y' or 'N' depending on whether it worked.
Maybe you can take this and adjust it to fit your application.