Meteor app, typical pattern: I have a publish on the server and a subscribe on the client.
Reactivity is great, but now I need to let the client synchronize its local Minimongo (or, let's say, fetch new values from the server) only every, say, 30 seconds.
Is there a way to do so? In other words, I need to be able to delay synchronization by n seconds and also repeat it every n seconds.
The only pattern that comes to mind right now is a very dirty one: just use another helper for the layout that only updates every n seconds. But that doesn't save me any traffic, because synchronization will happen anyway; I would only make it look as if the data hadn't been synchronized in real time.
It seems you don't necessarily want to stop/start the subscription itself (that would get difficult, as Meteor would think there is no data and reactively remove everything).
Really you just want to prevent the UI from updating as often. One way to do that is the following, which makes the local cursor query temporarily reactive (allowing the DOM to update) every 5 seconds and then non-reactive again right away:
# client.coffee
Meteor.setInterval ->
  Session.set('reactive', true)
  Session.set('reactive', false)
, 5000

Template.test.helpers
  docs: -> Collection.find {}, {reactive: Session.get('reactive')}
This would be my initial approach just to demo the concept, and it is pretty hacky; it works in a tiny app, but I haven't tested it in anything big. I've never seen this kind of thing used in a real app, but I understand why you might want it.
Another approach is to add an updateTimeStamp to each document. Then you can publish all the documents up to a specific timestamp and bump that timestamp every 30 seconds, making sure you do not receive the documents every time they are added or changed.
The biggest difficulty would be managing the time difference between the client and the server (see the sketch after the code below).
Meteor.publish("allPosts", function(until){
return Posts.find({updateTimeStamp: {$lte: until}});
});
And on the client:
Meteor.setInterval(function(){
  Meteor.subscribe("allPosts", new Date());
}, 30000);
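To handle the client/server clock difference mentioned above, one option is to ask the server for its own time and subscribe with that instead of the client's new Date(). A minimal sketch, assuming a made-up method name of serverTime:

// server
Meteor.methods({
  serverTime: function () {
    return new Date(); // evaluated with the server's clock
  }
});

// client
Meteor.setInterval(function () {
  Meteor.call("serverTime", function (err, serverNow) {
    if (!err) Meteor.subscribe("allPosts", serverNow);
  });
}, 30000);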
In my turn-based online game I have an in-game timer that ticks down from 24 hours to 0; when it reaches 0 for a player, that player has lost.
When a player makes their turn they write something like this to the database:
action: "not important"
timeStamp: 1670000000
What I want is for either of the two players to be able to get into the ongoing game at any time, read "timeStamp" and set the clock accordingly, showing how much time is left since the last action.
When writing to the database I am using ServerValue.TIMESTAMP (Android). I am aware of the ability to estimate the server time using the ServerTimeOffset described here:
https://firebase.google.com/docs/database/android/offline-capabilities#server-timestamps
But I feel it's not always accurate when testing, so I wanted to explore whether there is any other way to do this. What I really want is to get the actual server timestamp when reading the node:
timeLeft = actionTimeStamp - currentServerTime + 24h
Is this possible to do in ONE call? I am using RTDB, but I am open to moving to Firestore if it is possible there somehow.
There's no way to get the server timestamp without writing it to the database, but you can of course have each client write it and then immediately read it back.
That said, it shouldn't make much of a difference from using the initial start time that was written and the serverTimeOffset value.
For a working example, have a look at How to implement a distributed countdown timer in Firebase.
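For reference, here is a rough sketch of the serverTimeOffset approach using the web SDK (the Android SDK exposes the same ".info/serverTimeOffset" path); the games/gameId path and the 24-hour constant are assumptions for illustration:

var TURN_LIMIT_MS = 24 * 60 * 60 * 1000;

firebase.database().ref(".info/serverTimeOffset").once("value", function (offsetSnap) {
  var serverNow = Date.now() + offsetSnap.val(); // estimated server time
  firebase.database().ref("games/gameId/timeStamp").once("value", function (tsSnap) {
    var timeLeft = tsSnap.val() - serverNow + TURN_LIMIT_MS;
    console.log("ms remaining:", timeLeft);
  });
});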
I'm writing a small game for Android in Unity. Basically the player has to guess what's in the photo. Now my boss wants me to add an additional feature: after a successful/unsuccessful guess the player gets a panel to rate the photo (basically like or dislike), because we want to track which photos are not good and remove photos after a couple of successful guesses.
My understanding is that if we want to add +1 to a value in Firebase, I first have to make a call to get it, and then make a separate call to write back the value plus 1. I was wondering if there is a more efficient way to do it?
Thanks for any suggestions!
Instead of querying Firebase at the moment you want to increment, you can fetch the value at startup (in an onCreate-like method), keep the object around, and then use it when you want to update it.
Well, one thing you can do is store your data temporarily in some object, but NOT send it to Firebase right away. Instead, you can send the data to Firebase when the app/game is about to be paused/minimized, reducing potential lag and increasing player satisfaction. OnApplicationPause(bool) is one such function that gets called when the game is minimized.
To do what you want, I would recommend using a Transaction instead of just doing a SetValueAsync. This lets you change values in your large shared database atomically, by first running your transaction against the local cache and later against the server data if it differs (see this question/answer).
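A rough sketch of an atomic "likes" increment as a transaction, shown with the JavaScript SDK for brevity (the Unity plugin's equivalent is DatabaseReference.RunTransaction); the photos/photoId/likes path is made up for illustration:

firebase.database().ref("photos/photoId/likes").transaction(function (currentLikes) {
  // currentLikes may be null on the first run against the local cache
  return (currentLikes || 0) + 1;
});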
This gets into some larger interesting bits of the Firebase Unity plugin. Reads/writes will run against your local cache, so you can do things like attach a listener to the "likes" node of a picture. As your cache syncs online and your transaction runs, this callback will be asynchronously triggered letting you keep the value up to date without worrying about syncing during app launch/shutdown/doing your own caching logic. This also means that generally, you don't have to worry too much about your online/offline state throughout your game.
Firebase has an interesting feature/nuisance: when you listen on a data ref, you get all the data that was ever added to that ref. So, for example, when you listen for 'child_added', you get a replay of all the children that were added to that ref from the beginning of time. We are writing a commenting system with a dataset that looks something like this:
/comments
/sites
/sites/articles
/users
Sites have many articles and articles have many comments and users have many comments.
We want to be able to track all the comments a user makes, so we feel it is wise to put comments in a separate ref rather than partition them by the articles they belong to. We have a backend listener that needs to do things on new comments as they arrive (increment their child counts, adjust a user's stats etc.). My concern is that, after a while, it will take this listener a long time to start up if it has to process a replay of every comment ever made.
I thought about possibly storing comments only in articles and storing references to each comment's siteId/articleId/commentId in the user table so we could still find all the comments for a given user, but this complicates the backend, as it would then probably need to have a separate listener for each site or even each article, which could make it difficult to manage so many listeners.
Imagine if one of these articles is on a very high-traffic site with tens of thousands of articles and thousands of comments per article. Is the scaling answer to somehow keep track of the traffic levels of every site and set up and partition them in a way that they are assigned to different worker processes? And what about the question of startup time and how long it takes to replay all data every time we load up our workers?
Adding on to Frank's answer, here are a couple other possibilities.
Use a queue strategy
Since the workers are really expecting to process one-time events, give them one-time events which they can pull from a queue and delete after they finish processing. This resolves the multiple-worker scenario elegantly and ensures nothing is ever missed because a server was offline.
Utilize a timestamp to reduce backlog
A simple strategy for avoiding backlog during reboot/startup of the workers is to add a timestamp to all of the events and then do something like the following:
var startTime = Date.now() - 3600 * 1000; // an hour ago, in milliseconds
pathRef.orderByChild('timestamp').startAt(startTime);
Keep track of the last id processed
This only works well with push ids, since formats that do not sort naturally by key will likely become out of order at some point in the future.
When processing records, have your worker keep track of the last record it added by writing that value into Firebase. Then one can use orderByKey().startAt( lastKeyProcessed ) to avoid the backlog. Annoyingly, we then have to discard the first key. However, this is an efficient query, does not cost data storage for an index, and is quick to implement.
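A minimal sketch of that pattern, assuming made-up names for the worker's progress marker (meta/lastKeyProcessed) and its processing function (processComment):

var commentsRef = firebase.database().ref("comments");
var metaRef = firebase.database().ref("meta/lastKeyProcessed");

metaRef.once("value", function (snap) {
  var query = commentsRef.orderByKey();
  if (snap.exists()) query = query.startAt(snap.val());

  query.on("child_added", function (child) {
    if (child.key === snap.val()) return; // startAt is inclusive, skip the already-processed key
    processComment(child.val());          // your worker logic
    metaRef.set(child.key);               // remember progress for the next restart
  });
});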
If you only need to process new comments once, you can put them in a separate list, e.g. newComments vs. comments (the ones that have been processed). Then when you're done processing, move them from newComments to comments.
Alternatively you can keep all comments in a single list like you have today and add a field (e.g. "isNew") to it that you set to true initially. Then you can filter with orderByChild('isNew').equalTo(true) and update({ isNew: false }) once you're done with processing.
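A rough sketch of that isNew-flag variant, assuming each comment is written with isNew: true and that processComment is your own handler:

var commentsRef = firebase.database().ref("comments");

commentsRef.orderByChild("isNew").equalTo(true).on("child_added", function (snap) {
  processComment(snap.val());        // your processing logic
  snap.ref.update({ isNew: false }); // mark as done so it is not replayed after a restart
});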
I have an ASP.NET MVC 3 / .NET web application, which is heavily data-driven, mainly around the concept of "Locations" (New York, California, etc.).
Anyway, we have some pretty busy database queries, which get cached after they are finished.
E.g.:
public ICollection<Location> FindXForX(string x)
{
    var result = _cache.Get(x.ToKey()) as ICollection<Location>; // try cache
    if (result == null)
    {
        result = _repo.Get(x.ToKey()); // call db
        _cache.Add(x.ToKey(), result); // add to cache
    }
    return result;
}
But I don't want the unlucky first user to be left waiting for this database call.
The database call can take anywhere from 40-60 seconds, well over the default timeout for an ASP.NET request.
I want to "pre-warm" these calls for certain "popular" locations (e.g. New York, California) when my app starts up, or shortly after.
I don't want to simply do this in Global.asax (Application_Start), because the app would take too long to start up (I plan to pre-cache around 15 locations, so that's a few minutes of work).
Is there any way I can fire off this logic asynchronously? Maybe a service on the side is a better option?
The only other alternative I can think of is to have an admin page with buttons for these actions, so an administrator (e.g. me) can fire off these queries once the app has started up. That would be the easiest solution.
Any advice?
The quick and dirty way would be to fire off a Task from Application_Start.
But I've found that it's nice to wrap this functionality in a bit of infrastructure so that you can create an ~/Admin/CacheInfo page to let you monitor the progress, state, and exceptions that may occur while the cache is loading.
Look into "Always running" app setting for IIS 7.5. What this basically do is have an app pool ready whenever the existing one is to be recycled. Of course, the very first would take the 40-60 seconds but afterwards things would be fast unless you physically restart the machine.
Before you start cache warming, I suggest you check that the query is "as fast as it can be" by first looking at how many logical reads it is doing.
Sounds like you should just dump the results in a separate table and have a scheduled task to repopulate that table periodically.
If one pre-calculated table isn't enough because it ends up with too much data that you need to search through, you could use more than one.
One solution is to launch a worker thread in your Application_Start method that does the pre-warming in the background. If you do it right, your app won't take longer to start up, because the thread will be executed asynchronously.
One option is to use a website health monitoring service. It can be used to both check website health, and if scheduled frequently enough, to invoke your common URLs.
Doing the loading in a Task from Application_Start is the way to go, as mentioned by Scott.
Just be careful: if your site restarts and 10 people try to view California, you don't want to end up with 10 simultaneous instances of _repo.Get(x.ToKey()) trying to load the same data from the database.
It might be a good idea to store a boolean value "IsPreloading" in the application state. Set it to true at the start of your preload function and false at the end. If the value is set, make sure you don't load any of your 15 preloaded locations in FindXForX.
Would suggest taking a look at auto-starting your app, especially if you are load balanced.
I need to keep certain data (in a grid) up to date, and I was going to poll the server every 15 seconds or so to get the data and refresh the grid. However, it feels a bit dirty (the grid will show the loading icon every 15 seconds), which doesn't look great.
Another option is to check whether there is new data, compare it with the current data, and only refresh the grid if there are any changes (I would have to do this client-side, though, because maintaining the current state of every logged-in user also seems like overkill).
I'm sure there are better solutions and would love to hear about them.
I've heard about Comet, but it seems to be a bit of an overkill.
BTW, I'm using ASP.NET MVC on the server side.
I'd like to hear what people have to say for or against continuous polling with JS.
Cheers
Sounds like COMET is indeed the solution you're looking for. In that scenario, you don't need to poll, nor do comparisons, as you can push out only the "relevant" changed data to your grid.
Check out WebSync, it's a nice comet server for .NET that'll let you do exactly what you've described.
Here's a demo using ExtJS and ASP.NET that pushes a continuous stream of stock ticker updates. The demo is a little more than you need, but the principle is identical.
Every time you get the answer from the server, check if something has changed.
Do a request. Let the user know that you are working, with some spinner; don't hide it. Schedule the next request in 15 seconds. The next request executes; if nothing has changed, schedule the next one in 15 + 5 seconds. The next request executes; if nothing has changed, schedule the next one in 15 + 5 + 5 seconds. And so on. When a request finds that something has indeed changed, reset the interval to 15 seconds.
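A rough sketch of that decaying interval in plain JavaScript; the /grid-data endpoint and refreshGrid() are made-up names for illustration:

var baseDelay = 15000; // 15 seconds
var step = 5000;       // add 5 seconds for every "nothing changed" response
var delay = baseDelay;
var lastPayload = null;

function poll() {
  fetch('/grid-data')
    .then(function (res) { return res.text(); })
    .then(function (payload) {
      if (payload !== lastPayload) {
        lastPayload = payload;
        refreshGrid(payload); // redraw the grid only when something changed
        delay = baseDelay;    // change detected: reset the interval
      } else {
        delay += step;        // nothing changed: back off a little
      }
      setTimeout(poll, delay);
    });
}

poll();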
Prototype can do this semi-automatically with Ajax.PeriodicalUpdater, but you probably need something more customized to your needs.
Anyway, just an idea.
As for continuous polling in general: it's bad only if you hit a different site (using a PHP "bridge" or something like that). If you're using your own resources, you just have to make sure you don't deplete them. Set decent intervals with a decay.
I suggest Comet is not overkill if "updates need to be constant." 15 seconds is very frequent; is your site visited by many? Your server may be consumed serving these requests while starving others.
I don't know what your server-side data source looks like, or what kind of data you're serving, but one solution is to serve your data with a timestamp, and send the timestamp of the last poll with every subsequent request.
Poll the server, sending the timestamp of when the service was last polled (e.g. lastPollTime).
The server uses the timestamp to determine what data is new/updated and returns only that data (the delta), decreasing your transmission size and simplifying your client-side code.
It may be empty, it may be a few cells, it may be the entire grid, but the client always updates data that is returned to it because it is already known to be new.
The benefits of this method are that it simplifies your client side code (which is less code for the client to load), and decreases your transmission size for subsequent polls that have no new data for the user.
Also, this keeps server-side state simple: you don't have to save a state for each individual user. You just have one state, the state of the current data, which is differentiated by access time.
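A rough sketch of that delta polling, assuming a made-up /grid-data endpoint that returns { rows: [...], serverTime: ... } and a renderRows() helper:

var lastPollTime = 0;

function pollForChanges() {
  fetch('/grid-data?since=' + lastPollTime)
    .then(function (res) { return res.json(); })
    .then(function (delta) {
      if (delta.rows.length > 0) {
        renderRows(delta.rows);          // only the changed/new rows come back
      }
      lastPollTime = delta.serverTime;   // use the server's clock, not the client's
      setTimeout(pollForChanges, 15000);
    });
}

pollForChanges();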
I think checking if there is any new data is a good option.
I would count the number of rows in the database and compare that with the number of rows in your (HTML) table. If they're not the same, get the difference in rows.
Say you have 12 table rows and there are 14 database rows when you check: get the latest (14 - 12) = 2 rows.
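A rough sketch of that comparison; the /row-count and /latest-rows endpoints, the #grid table id, and appendRowsToGrid() are made-up names for illustration:

fetch('/row-count')
  .then(function (res) { return res.json(); })
  .then(function (data) {
    var tableRows = document.querySelectorAll('#grid tbody tr').length;
    var missing = data.count - tableRows;           // e.g. 14 - 12 = 2
    if (missing > 0) {
      return fetch('/latest-rows?limit=' + missing) // fetch only the newest rows
        .then(function (res) { return res.json(); })
        .then(appendRowsToGrid);                    // assumed render helper
    }
  });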