In a web app I have this:
function onChildAdded(snapshot) {
  // ...
}
someFirebaseLocation.on('child_added', onChildAdded);
I'm looking for a 100% reliable way to detect whether a child_added event is immediate, so that I can handle the two cases correctly: the case where, after push(), the callback fires immediately (synchronously), and the case where it fires asynchronously later.
Setting a flag before the push() call doesn't seem reliable to me: there's a potential race condition if an asynchronous event comes in, and the flag might not get reset if an error is thrown.
Another option would be
var pushedRef = someFirebaseLocation.push(...);
and then in child_added
if (snapshot.name() === pushedRef.name())
but an incoming message could in theory have the same .name(), so there could be collisions. The probability of a clash is debatable, but I'd prefer a simple and watertight way to get this information.
It would be great if I could do this:
function onChildAdded(snapshot, prevChildName, isImmediateEvent) {
  if (isImmediateEvent) {
    // Handle as sync event.
  } else {
    // Handle as async event.
  }
}
someFirebaseLocation.on('child_added', onChildAdded);
or this
function onChildAdded(snapshot, prevChildName) {
  if (snapshot.isFromImmediateEvent) {
    // Handle as sync event.
  } else {
    // Handle as async event.
  }
}
someFirebaseLocation.on('child_added', onChildAdded);
Is there some other reliable option? Otherwise I'll ask the Firebase guys whether they could pass a boolean "isImmediateEvent" into the callback (after snapshot and prevChildName).
Tobi
You've covered the two options for now and either one should work reliably (see notes below). We might add features in the future to make this easier, but nothing concrete is planned at this point.
A couple notes:
Setting a flag should work fine. No asynchronous events will fire until after your synchronous code has finished running. You can avoid the error issue by using a try/finally block to reset the flag (see the sketch after these notes).
push() ids are designed to be universally unique, so you really shouldn't worry about conflicts.
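For illustration, here is a minimal sketch of the flag-plus-try/finally approach, assuming the Firebase JavaScript API used in the question; the flag and the pushMessage helper are made-up names:
var isLocalPush = false;

function onChildAdded(snapshot, prevChildName) {
  if (isLocalPush) {
    // Immediate (synchronous) event triggered by our own push() below.
  } else {
    // Asynchronous event coming from the server.
  }
}

someFirebaseLocation.on('child_added', onChildAdded);

function pushMessage(message) {
  isLocalPush = true;
  try {
    // child_added fires synchronously for the local write, so onChildAdded
    // runs while the flag is still set.
    someFirebaseLocation.push(message);
  } finally {
    // Reset even if push() or the callback throws.
    isLocalPush = false;
  }
}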
Is it possible to stop (kill) asynchronous Call?
In my app I have, on the client side, something like:
Meteor.call('doCalculation', function(err, result) {
  // do something with the result
});
'doCalculation' may take a long time (this is OK). I don't want the user to start a new call while they already have one running; instead, I want to allow the user to stop the current call and submit a new one. What is the correct way to do this?
The only idea I have is to communicate between client and server through Mongo. Somewhere in the 'doCalculation' function I could observe a Mongo document/collection and, based on that, do something in the function (e.g. throw an exception). Do you have any better ideas?
You can use a semaphore for this purpose. When the semaphore is 1, requests are allowed to be sent. When the semaphore is 0, requests are not allowed to be sent. The semaphore should be 1 by default and just before you send the request, you need to set it to 0. When a response is successful, you set the semaphore back to 1.
As for the timeout: you could set one with setTimeout after sending the request, like this:
if (semaphore) {
  var isTimedOut = false;
  var isSuccess = false;
  semaphore = 0; // No need to use the var keyword, as this should be declared outside of this scope
  Meteor.call('doCalculation', function(err, result) {
    isSuccess = true;
    semaphore = 1; // Response received, allow new requests again
    // do something with the result
  });
  setTimeout(function() {
    if (!isSuccess) {
      isTimedOut = true;
      // do something else, to handle the timed-out state
    }
  }, 10000);
}
This is tricky, because you cannot generally set timeouts from the client's point of view, and you don't need to, for a bunch of architectural reasons. The most important thing is that if you lose network connectivity or the server crashes (two cases that timeouts are designed to manage), the client is aware immediately because it is disconnected. You can check Meteor.status().connected if this happens often.
It sounds like you're running a long calculation on the server. My suggestion is to return a calculationId immediately, and then update a collection with progress, e.g., CalculationProgresses.update(calculationId, {$set: {progress: currentProgress}}) as you calculate. Your UI can then update the progress reactively, in the most convenient way possible.
Note that when you run long calculations on the server, you need to occasionally "yield" so that other work gets a chance to happen. Node, on which Meteor is based, is tricky for long calculations if you don't master this notion of yielding. In Meteor, you can yield easily by updating a collection (e.g., your progress collection). This will solve lots of problems you're probably experiencing as you write your application.
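As a rough sketch of that approach (the collection, the startCalculation method, the doOneStep helper, and someInput are made-up names, and this assumes the pre-1.0 Meteor API used in the question):
CalculationProgresses = new Meteor.Collection('calculationProgresses');

if (Meteor.isServer) {
  Meteor.methods({
    startCalculation: function (input) {
      var calculationId = CalculationProgresses.insert({progress: 0});
      // Run the long work outside the method body so the id is returned immediately.
      Meteor.setTimeout(function () {
        var totalSteps = 100;
        for (var step = 1; step <= totalSteps; step++) {
          doOneStep(input, step); // hypothetical chunk of the long calculation
          // Each update also yields, giving other work a chance to run.
          CalculationProgresses.update(calculationId,
            {$set: {progress: step / totalSteps}});
        }
      }, 0);
      return calculationId;
    }
  });
}

if (Meteor.isClient) {
  Meteor.call('startCalculation', someInput, function (err, calculationId) {
    // The UI can now reactively render CalculationProgresses.findOne(calculationId).
  });
}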
I think you need a server-side solution for this. If you go with a client-side solution, you don't handle two cases:
the user reloads their browser
the user uses two browsers
I would create these methods (a rough sketch follows below):
isCalculationActive() -- this checks whether the user already has an active calculation. On the server, you can either keep that fact in memory or write it to the db. On the client, if this returns false, you can proceed to call doCalculation(); if it returns true, you can show the user a popup or alert asking whether they want to cancel it and proceed.
doCalculation() -- this cancels any outstanding calculation by that user and starts a new one.
With these implemented, the user can reload their browser without affecting either the running calculation or correct behavior. And if they try a second browser, everything should still work as expected.
If you want to give the user the option to simply stop the job and not start a new one, you can also create:
cancelCalculation() -- this cancels any outstanding calculation by that user.
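A rough sketch of those methods, assuming the calculation state is kept in a collection (the Calculations collection and its fields are made-up names):
Calculations = new Meteor.Collection('calculations');

if (Meteor.isServer) {
  Meteor.methods({
    isCalculationActive: function () {
      return Calculations.find({userId: this.userId, active: true}).count() > 0;
    },
    doCalculation: function (input) {
      // Cancel any outstanding calculation by this user, then start a new one.
      Calculations.update({userId: this.userId, active: true},
        {$set: {active: false, cancelled: true}}, {multi: true});
      var calculationId = Calculations.insert({userId: this.userId, active: true, input: input});
      // The long-running worker should periodically re-read its document and stop
      // when it sees cancelled: true (this mirrors the Mongo-based idea from the question).
      return calculationId;
    },
    cancelCalculation: function () {
      Calculations.update({userId: this.userId, active: true},
        {$set: {active: false, cancelled: true}}, {multi: true});
    }
  });
}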
Is there a way to know when data has been initially fully fetched from the server after running Deps.autorun for the first time?
For example:
Deps.autorun(function () {
  var data = ItemsCollection.find().fetch();
  console.log(data);
});
Initially my console log will show Object { items=[0] } as the data has not yet been fetched from the server. I can handle this first run.
However, the issue is that the function will be rerun whenever data is received which may not be when the full collection has been loaded. For example, I sometimes received Object { items=[12] } quickly followed by Object { items=[13] } (which isn't due to another client changing data).
So - is there a way to know when a full load has taken place for a certain dependent function and all collections within it?
You need to store the subscription handle somewhere and then use the ready method to determine whether the initial data load has been completed.
So if you subscribe to the collection using:
itemSub = Meteor.subscribe('itemcollections', blah blah...)
You can then surround your find and console.log statements with:
if (itemSub.ready()) { ... }
and they will only be executed once the initial dataset has been received.
Note that there are occasions when the collection handle will return ready marginally before some of the items are received, if the collection is large and you are dealing with significant latency, but the problem should be very minor. For more on why and how the ready() method actually works, see this.
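Put together with the autorun from the question, a minimal sketch might look like this (the publication name 'itemcollections' is taken from the subscribe call above):
var itemSub = Meteor.subscribe('itemcollections');

Deps.autorun(function () {
  // ready() is reactive, so this reruns once the initial data has arrived
  // (and again on later changes to the query results).
  if (itemSub.ready()) {
    var data = ItemsCollection.find().fetch();
    console.log(data);
  }
});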
Meteor.subscribe returns a handle with a reactive ready method, which is set to true when "an initial, complete snapshot of the record set has been sent" (see http://docs.meteor.com/#publish_ready)
Using this information you can design something simple such as :
var waitList = [Meteor.subscribe("firstSub"), Meteor.subscribe("secondSub"), ...];
Deps.autorun(function() {
  // http://underscorejs.org/#every
  var waitListReady = _.every(waitList, function(handle) {
    return handle.ready();
  });
  if (waitListReady) {
    console.log("Every document sent in the publications is now available.");
  }
});
Unless you're prototyping a toy project, this is not a solid design and you probably want to use iron-router (http://atmospherejs.com/package/iron-router), which provides great design patterns to address this kind of problem.
In particular, take a moment to have a look at these 3 videos from the main iron-router contributor:
https://www.eventedmind.com/feed/waiting-on-subscriptions
https://www.eventedmind.com/feed/the-reactive-waitlist-data-structure
https://www.eventedmind.com/feed/using-wait-waiton-and-ready-in-routes
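For reference, a rough illustration of the waitOn idea those videos cover, assuming the Iron Router API of that era (the route and publication names are made up):
Router.map(function () {
  this.route('items', {
    path: '/items',
    // The route's template is not rendered until every handle returned here
    // reports ready().
    waitOn: function () {
      return [Meteor.subscribe('firstSub'), Meteor.subscribe('secondSub')];
    }
  });
});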
I am trying to implement something like this:
/* We use the command pattern to encode actions in
a 'command' object. This allows us to keep an audit trail
and is required to support 'undo' in the client app. */
CommandQueue.insert(command);
/* Queuing a command should trigger its execution. We use
an observer for this. */
CommandQueue
  .find({...})
  .observe({
    added: function(command) {
      /* While executing the action encoded by 'command'
         we usually want to insert objects into other collections. */
      OtherCollection.insert(...)
    }
  });
Unfortunately it seems that Meteor keeps the prior state of the OtherCollection while executing the transaction on CommandQueue. Changes are made temporarily to the OtherCollection, but as soon as the transaction on CommandQueue finishes, the prior state of the OtherCollection is restored and our changes disappear.
Any ideas why this is happening? Is this intended behaviour or a bug?
This is the expected behavior, though it is a little subtle, and not guaranteed (just an implementation detail).
The callback to observe fires immediately when the command is inserted into CommandQueue. So the insert to OtherCollection happens while the CommandQueue.insert method is running, as part of the same call stack. This means the OtherCollection insert is considered part of the local 'simulation' of the CommandQueue insert, and is not sent to the server. The server runs the CommandQueue insert and sends the result back, at which point the client discards the results of the simulation and applies the results sent from the server, making the OtherCollection change disappear.
A better way to do this would be to write a custom method. Something like:
Meteor.methods({
  auditedCommand: function (command) {
    CommandQueue.insert(command);
    var whatever = someProcessing(command);
    OtherCollection.insert(whatever);
  }
});
Then:
Meteor.call('auditedCommand', command);
This will show up immediately on the client (latency compensation) and is more secure as clients can't insert to CommandQueue without also adding to OtherCollection.
EDIT: this will probably change. The added callback shouldn't really be considered part of the local simulation of CommandQueue.insert; that's just the way it works now. That said, a custom method is probably still a better approach: it will work even if other people add commands to the command queue, and it is more secure.
I'm not sure about your observe behavior but we accomplished the same thing using a server-side allow method:
CommandQueue.allow({
  insert: function (userId, doc) {
    OtherCollection.insert(...);
    return (userId && doc.owner === userId);
  }
});
This is also more secure than putting this logic client side.
[Edit]
The main question here loosely translates to 'is Flex multi-threaded?'. I have since found out that it is not, so I won't have data mysteriously changing halfway through an operation. The code below worked, but made things awkward and confusing. I eventually fixed the problem with an architecture change that eliminated the need to suppress events, as the first commenter suggested.
Infinite loops were eliminated by changing the way events were listened to and performing certain actions explicitly rather than via events.
Collating events was made easier using a command pattern.
Basically, do not use the code below if you come across this page!
[/Edit]
I'm building some Flex applications using a simple, lightweight MVC pattern. Models extend or encapsulate a dispatcher and fire events when updated. I'm stuck with Flex 3.5.
In some situations, I'll want to suppress these events to avoid infinite loops or help collate multiple actions into a single event.
My first stab at a solution that doesn't litter the models with unnecessary and confusing code is this:
private var _suppressEvents:Boolean = false;

public function suppressEvents(callback:Function):void
{
    // In case of error, ensure the suppression is turned off, then re-throw
    var err:Error = null;
    _suppressEvents = true;
    try
    {
        callback();
    }
    catch(e:Error)
    {
        err = e;
    }
    _suppressEvents = false;
    if (err)
    {
        throw (err);
    }
}

public function dispatch(type:String, data:*):void
{
    // Suppress if called from a suppress callback.
    if (!_suppressEvents)
    {
        _dispatcher.dispatchEvent(new DataEvent(type, data));
    }
}
Obviously I call 'suppressEvents' with a function containing the model code I wish to run.
My questions:
1: Is there a chance I could accidentally lose events using this technique?
2: Do I need to think about any other error edge cases when it comes to ensuring I don't accidentally end up in a suppressed state after a call?
3: Is there a cleaner way I'm missing?
Thanks!
The applications in my project have until now been communicating over QtDBus using synchronous calls. However, I now need to convert a few of these calls to asynchronous ones.
For that I chose to use this API available in QtDBus:
QDBusAbstractInterface::callWithCallback
But the problem is that the current implementation has these QtDBus sync calls scattered in many places in the code, and the code snippets that follow these sync calls assume that control only reaches them once the preceding call has been successfully serviced and a reply obtained.
This will no longer be the case when the calls change to async. Moreover, the calls are made in different contexts, so I will need to keep track of the state of the system before each QtDBus call, so that I know what to do when I receive the reply.
Is there really any way to convert the calls to async without rupturing the fabric of the current code in a big way?
One approach I can think of is to use the FSM pattern.
Any hints or design suggestions will be much appreciated.
Thanks!
The way I understand it, you will need to call the same method and then process the return value differently based on the state at the time of the call, such as:
void function()
{
    //do stuff
    value = SynchronousCall();
    if (state == 1)
    {
        doSomething(value);
    }
    else
    {
        doSomethingElse(value);
    }
}
Instead of a full implementation of the Finite State Machine pattern, which can make a mess with the number of classes it adds, I would recommend adding a separate callback method for each state:
void function()
{
    //do stuff
    const char *callback;
    if (state == 1)
    {
        callback = SLOT(doSomething(ValueType));
    }
    else
    {
        callback = SLOT(doSomethingElse(ValueType));
    }
    callWithCallback(method, args, receiver, callback, error);
}
Then in each method you can assume the state and process the return value accordingly.
Another slightly (very) hacky way would be to simply spin-wait after each asynchronous call, calling QThread::yieldCurrentThread() in the loop while you wait for the value to return. That way it is still technically an asynchronous call, but it acts synchronous.