Queuing asynchronous actions in Reflux

When using RefluxJS stores with asynchronous actions, you can easily end up having race conditions between your actions.
Abstract description of the issue
For example, our store is in state X. An async action A is called from X, and before it finishes, another async action B is called, also from X. From here, no matter which action finishes first, it goes wrong.
If B finishes first with a state Y1, A finishes last and overwrites Y1 with its own state Y2.
If A finishes first with a state Y2, B overwrites Y2 with Y1.
The desired behavior would be to have:
  A    B
X -> Y -> Z
Where B is not based on X, but on Y, and leads to a consistent Z state, instead of two actions based on the same state, leading to an inconsistent state:
     A
X -> Y1 .--> Y2
 \      /
  '----'
     B
Implemented example of the issue
I wrote a minimal working example, running with Node, of the problem I am talking about.
var Q = require('q');
var Reflux = require('reflux');
var RefluxPromise = require('reflux-promise');

Reflux.use(RefluxPromise(Q.Promise));

var AsyncActions = Reflux.createActions({
    'add': { asyncResult: true }
});

var AsyncStore = Reflux.createStore({
    init: function () {
        // The state
        this.counter = 0;
        AsyncActions.add.listenAndPromise(this.onAdd, this);
    },
    // Increment counter after a delay
    onAdd: function (n, delay) {
        var that = this;
        return apiAdd(this.counter, n, delay)
            .then(function (newCounter) {
                that.counter = newCounter;
                that.trigger(that.counter);
            });
    }
});

// Simulate an API call that performs the add computation. The delay
// parameter is used for testing.
// @return {Promise<Number>}
function apiAdd(counter, n, delay) {
    var result = Q.defer();
    setTimeout(function () {
        result.resolve(counter + n);
    }, delay);
    return result.promise;
}

// Log the store triggers
AsyncStore.listen(console.log.bind(undefined, 'Triggered'));

// Add 3 after 1 second.
AsyncActions.add(3, 1000);
// Add 100 almost immediately.
AsyncActions.add(100, 1);

// Console output:
// > Triggered 100
// > Triggered 3

// Desired output (queued actions):
// > Triggered 3
// > Triggered 103
With these dependencies in package.json:
{
    "dependencies": {
        "q": "^1.3.0",
        "reflux": "^0.3",
        "reflux-promise": "^1"
    }
}
Nature of the question
I expected RefluxJS to queue actions, but it doesn't, so I am looking for a way to order these actions correctly. But even if I managed to queue these actions somehow (so that B is issued after A), how could I be certain that, once A finishes, issuing B is still a valid action?
Maybe I am using RefluxJS the wrong way in the first place, and this scenario does not arise in a properly structured app.
Is queuing the asynchronous actions (assuming this is possible within a Reflux app) the solution? Or should we somehow avoid these scenarios in the first place?

Your example looks like an issue with the concept of a "source of truth" more than anything else. You store the current state of the number ONLY client side, but ONLY update it after receiving confirmation from the server that an operation was applied to it.
Of course that causes problems. You are mixing the actions upon the number and the storage of the number in a way that leaves no single source of truth for what the number is at any given moment. Between the time an action is called and the time it finishes, the number is in limbo, and that's no good.
Either store the number client side, and every time you add to it, add to that number directly and then tell the server what the new number is (i.e. the client takes responsibility as the source of truth for the number while the client runs).
Or store the number server side, and every time you update it with an action from the client, the server returns the new number (i.e. the source of truth for the number is completely server side).
Then, even if race issues occur, you still have a source of truth for what the number is, and that source can be checked and confirmed. For example, if the server holds the source of truth for the number, the API can also return a timestamp for the status of that value every time it returns it, and you can check it against the last value you got from the API to make sure you are actually using the newest value.
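To make the second option concrete, here is a minimal sketch, assuming the server owns the counter. The names serverCounter and apiAdd are illustrative only (not the asker's real API); the server applies the increment itself and returns the authoritative new value, so two racing adds can never clobber each other:

```javascript
// Hypothetical sketch: the server is the single source of truth.
var serverCounter = 0; // lives "server side"

function apiAdd(n, delay) {
    return new Promise(function (resolve) {
        setTimeout(function () {
            serverCounter += n;     // the server applies the increment itself
            resolve(serverCounter); // and returns the authoritative new value
        }, delay);
    });
}

// Two racing adds, as in the Reflux example: the slow add no longer
// overwrites the fast one, because neither client computed the sum.
var done = Promise.all([apiAdd(3, 50), apiAdd(100, 1)]);
done.then(function (results) {
    console.log(results, serverCounter);
});
```

Regardless of which timeout fires first, both increments land and the final value is 103; only the order in which intermediate values are observed changes.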

Allow a future to store a pointer to a pinned value in its container

Prelude
I have been working on a piece of code that attempts to provide a reusable API for implementing an asynchronous stream over a REST paginator.
I have gone through many iterations and settled on storing the state in an enum that describes where the process currently is, both because I feel it is the best fit for this purpose and because it is something to learn from, being especially explicit about the whole process. I do not want to use stream! or try_stream! from the async-stream crate.
The state begins at Begin and moves a PaginationDelegate into the next state after using it to make a request. This state is Pending, and it owns the delegate and a future returned from PaginationDelegate::next_page.
The issue appears because the next_page method needs a reference, &self, but that self is not stored on the stack frame of the future held in the Pending state.
I wanted to keep this "flat" because I find the algorithm easier to follow, but I also wanted to learn the most correct way to create this self-referential structure. I am aware that I can wrap the future and have it own the PaginationDelegate, and indeed that may be the method I end up using. Nevertheless, for my own education, I want to know how I could move the two values into the same holding structure and keep the pointer alive.
Delegate Trait
Here a PaginationDelegate is defined. This trait is intended to be implemented and used by any method or function that intends to return a PaginatedStream or dyn Stream. Its purpose is to define how the requests will be made, as well as to store a limited subset of the state (the offset of the next page from the REST API, and the total number of items the API is expected to return).
#[async_trait]
pub trait PaginationDelegate {
    type Item;
    type Error;

    /// Performs an asynchronous request for the next page and returns either
    /// a vector of the result items or an error.
    async fn next_page(&self) -> Result<Vec<Self::Item>, Self::Error>;

    /// Gets the current offset, which will be the index at the end of the
    /// current/previous page. The value returned from this will be changed by
    /// [`PaginatedStream`] immediately following a successful call to
    /// [`next_page()`], increasing by the number of items returned.
    fn offset(&self) -> usize;

    /// Sets the offset for the next page. The offset is required to be the
    /// index of the last item from the previous page.
    fn set_offset(&mut self, value: usize);

    /// Gets the total count of items that are currently expected from the API.
    /// This may change if the API returns a different number of results on
    /// subsequent pages, and may be less than what the API claims in its
    /// response data if the API has a maximum limit.
    fn total_items(&self) -> Option<usize>;
}
Stream State
The next segment is the enum itself, which serves as the implementor of Stream and holds the current state of the iterator.
Note that currently the Pending variant keeps the delegate and the future separate. I could have used future: Pin<Box<dyn Future<Output = Result<(D, Vec<D::Item>), D::Error>>>> to keep the delegate inside the Future, but I prefer not to because I want to solve the underlying problem rather than gloss over it. Also, the delegate field is a Pin<Box<D>> because I was experimenting, and I feel this is the closest I have gotten to a correct solution.
pub enum PaginatedStream<D: PaginationDelegate> {
    Begin {
        delegate: D,
    },
    Pending {
        delegate: Pin<Box<D>>,
        #[allow(clippy::type_complexity)]
        future: Pin<Box<dyn Future<Output = Result<Vec<D::Item>, D::Error>>>>,
    },
    Ready {
        delegate: D,
        items: VecDeque<D::Item>,
    },
    Closed,
    Indeterminate,
}
Stream Implementation
The last part is the implementation of Stream. It is incomplete for two reasons: I have not finished it, and it is best to keep the example short and minimal.
impl<D: 'static> Stream for PaginatedStream<D>
where
    D: PaginationDelegate + Unpin,
    D::Item: Unpin,
{
    // If the state is `Pending` and the future resolves to an `Err`, that error is
    // forwarded only once and the state set to `Closed`. If there is at least one
    // result to return, the `Ok` variant is, of course, used instead.
    type Item = Result<D::Item, D::Error>;

    fn poll_next(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // Bring the variants into scope to avoid the full path on every arm.
        use PaginatedStream::*;

        // Take ownership of the current state (`self`) and replace it with the
        // `Indeterminate` state until the new state is in fact determined.
        let this = std::mem::replace(&mut *self, Indeterminate);

        match this {
            // This state only occurs at the entry of the state machine. It only holds the
            // `PaginationDelegate` that will be used to update the offset and make new requests.
            Begin { delegate } => {
                // Pin the delegate to the heap to ensure that it doesn't move and that pointers
                // remain valid even after moving the value into the new state.
                let delegate = Box::pin(delegate);

                // Set the current state to `Pending`, after making the next request using the
                // pinned delegate.
                self.set(Pending {
                    delegate,
                    future: PaginationDelegate::next_page(delegate.as_ref()),
                });

                // Return the distilled version of the new state to the caller, indicating that a
                // new request has been made and we are waiting for new data.
                Poll::Pending
            }
            // At some point in the past this stream was polled and made a new request. Now it is
            // time to poll the future returned from that request, and if results are available,
            // unpack them into the `Ready` state and move the delegate. If the future still
            // doesn't have results, set the state back to `Pending` and move the fields back
            // into position.
            Pending { delegate, future } => todo!(),
            // The request has resolved with data in the past, and there are items ready to hand
            // to the caller. In the event that there are no more items in the `VecDeque`, we
            // will make the next request and construct the `Pending` state again.
            Ready { delegate, items } => todo!(),
            // Either an error has occurred, or the last item has already been yielded. Nobody
            // should be polling anymore, but to be nice, just tell them that there are no more
            // results with `Poll::Ready(None)`.
            Closed => Poll::Ready(None),
            // The `Indeterminate` state should only be used internally and reset back to a
            // valid state before yielding `Poll` to the caller. This branch should never be
            // reached; if it is, that is a panic.
            Indeterminate => unreachable!(),
        }
    }
}
Compiler Messages
At the moment, in the Begin branch, there are two compiler messages where the borrow of the delegate (delegate.as_ref()) is taken and passed to the PaginationDelegate::next_page method.
The first is that the delegate does not live long enough, because the pinned value is moved into the new state variant Pending and no longer resides where it was assigned. I do not understand why the compiler wants this to exist for 'static, though, and would appreciate an explanation.
error[E0597]: `delegate` does not live long enough
  --> src/lib.rs:90:59
   |
90 |     future: PaginationDelegate::next_page(delegate.as_ref()),
   |             ------------------------------^^^^^^^^^^^^^^^^^-
   |             |                             |
   |             |                             borrowed value does not live long enough
   |             cast requires that `delegate` is borrowed for `'static`
...
96 | }
   | - `delegate` dropped here while still borrowed
I would also like to hear any methods you have for creating the values of a struct's fields when those values rely on data that should be moved into the struct (self-referential, the main issue of this entire post). I know it is wrong (and impossible) to use MaybeUninit here, because any placeholder value that is later dropped would cause undefined behavior. Possibly show me a method for allocating a structure of uninitialized memory and then overwriting its fields with values after they have been constructed, without letting the compiler attempt to free the uninitialized memory.
The second compiler message is as follows. It is similar to the first, except that the temporary value for delegate is moved into the struct. I understand this to be fundamentally the same issue described above, just reported differently by two separate heuristics. Is my understanding wrong?
error[E0382]: borrow of moved value: `delegate`
  --> src/lib.rs:90:59
   |
84 |     let delegate = Box::pin(delegate);
   |         -------- move occurs because `delegate` has type `Pin<Box<D>>`, which does not implement the `Copy` trait
...
89 |     delegate,
   |     -------- value moved here
90 |     future: PaginationDelegate::next_page(delegate.as_ref()),
   |                                           ^^^^^^^^^^^^^^^^^ value borrowed here after move
Environment
This is real code, but I believe it already qualifies as an MCVE.
To set up the environment for this, the crate dependencies are as follows.
[dependencies]
futures-core = "0.3"
async-trait = "0.1"
And the imports that are used in the code,
use std::collections::VecDeque;
use std::pin::Pin;
use std::task::{Context, Poll};
use async_trait::async_trait;
use futures_core::{Future, Stream};
The potential solution that I did not want to use, because it hides the underlying issue (or rather sidesteps the intent of this question entirely), follows.
Where the PaginatedStream enum is defined, change Pending to the following.
Pending {
    #[allow(clippy::type_complexity)]
    future: Pin<Box<dyn Future<Output = Result<(D, Vec<D::Item>), D::Error>>>>,
},
Now, inside the implementation of Stream change the matching arm for Begin to the following.
// This state only occurs at the entry of the state machine. It only holds the
// `PaginationDelegate` that will be used to update the offset and make new requests.
Begin { delegate } => {
    self.set(Pending {
        // Construct a new future that awaits the result and has a new type for `Output`
        // that contains both the result and the moved delegate.
        // Here the delegate is moved into the future via the `async` block.
        future: Box::pin(async {
            let result = delegate.next_page().await;
            result.map(|items| (delegate, items))
        }),
    });

    // Return the distilled version of the new state to the caller, indicating that a
    // new request has been made and we are waiting for new data.
    Poll::Pending
}
The compiler knows that this async block is really async move; you could be more explicit if you wanted. This effectively moves the delegate into the stack frame of the future that is boxed and pinned, ensuring that whenever the value is moved in memory, the two values move together and the pointer cannot be invalidated.
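The same ownership move can be sketched without any async machinery. In this std-only sketch (Delegate is a stand-in, not the real PaginationDelegate), a move closure plays the role of the async move block: it takes the captured state into its own storage, so state and computation travel together and no borrow has to outlive a stack frame:

```rust
// A stand-in delegate with just enough state to demonstrate ownership.
struct Delegate {
    offset: usize,
}

impl Delegate {
    fn next_page(&self) -> Vec<u32> {
        // pretend this hits the API using `self.offset`
        vec![1, 2, 3]
    }
}

// Boxing the `FnOnce` mirrors boxing the future: the closure owns
// `delegate`, so the caller gets both back when it "resolves",
// just like `Output = Result<(D, Vec<D::Item>), D::Error>`.
fn make_pending(delegate: Delegate) -> Box<dyn FnOnce() -> (Delegate, Vec<u32>)> {
    Box::new(move || {
        let items = delegate.next_page(); // the borrow is local to the closure body
        (delegate, items)                 // ownership is handed back to the caller
    })
}

fn main() {
    let pending = make_pending(Delegate { offset: 0 });
    let (delegate, items) = pending();
    assert_eq!(delegate.offset, 0);
    assert_eq!(items, vec![1, 2, 3]);
}
```

Because the closure owns its captures, no 'static borrow error can arise; the future produced by async move works the same way.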
The other matching arm for Pending needs to be updated to reflect the change in signature. Here is a complete implementation of the logic.
// At some point in the past this stream was polled and asked the delegate to make a new
// request. Now it is time to poll the future returned from that request,
// and if results are available, unpack them into the `Ready` state and move
// the delegate. If the future still doesn't have results, set the state
// back to `Pending` and move the fields back into position.
Pending { mut future } => match future.as_mut().poll(ctx) {
    // The future from the last request returned successfully with new items,
    // and gave the delegate back.
    Poll::Ready(Ok((mut delegate, items))) => {
        // Tell the delegate the offset for the next page, which is the sum of the
        // old offset and the number of items that the API sent back.
        delegate.set_offset(delegate.offset() + items.len());

        // Construct a new `VecDeque` so that the items can be popped from the front.
        // This should be more efficient than reversing the `Vec`, and less confusing.
        let mut items = VecDeque::from(items);

        // Get the first item out so that it can be yielded. The case where there are
        // no more items should have been handled by the `Ready` branch, so it is
        // safe to unwrap.
        let popped = items.pop_front().unwrap();

        // Set the new state to `Ready` with the delegate and the items.
        self.set(Ready { delegate, items });
        Poll::Ready(Some(Ok(popped)))
    }
    // The future from the last request returned with an error.
    Poll::Ready(Err(error)) => {
        // Set the state to `Closed` so that any future polls will return
        // `Poll::Ready(None)`. The caller can even match against this if needed.
        self.set(Closed);

        // Forward the error to whoever polled. This will only happen once, because
        // the error is moved and the state set to `Closed`.
        Poll::Ready(Some(Err(error)))
    }
    // The future from the last request is still pending.
    Poll::Pending => {
        // Because the state is currently `Indeterminate`, it must be set back to
        // what it was. This moves the future back into the state.
        self.set(Pending { future });

        // Tell the caller that we are still waiting for a response.
        Poll::Pending
    }
},

Why are the functions called several times within the FlatMaps?

I have the following code:
func myFunction() -> AnyPublisher<MyObject, MyError> {
    return self.globalPublisher
        // FlatMap 1
        .flatMap { value1 -> AnyPublisher<Object1, Error1> in
            return self.function1(value1)
        }
        // FlatMap 2
        .flatMap { value2 -> AnyPublisher<Object2, Error2> in
            return self.function2(value2)
        }
        // FlatMap 3
        .flatMap { value3 -> AnyPublisher<Object3, Error3> in
            return self.function3(value3)
        }
        .eraseToAnyPublisher()
}
myFunction has only one subscriber (checked in the debugger). globalPublisher can fire multiple times and at any time; each emission triggers the whole flatMap chain.
When globalPublisher fires for the first time, everything is fine: every function in every flatMap block is called once. But the second time, something strange happens. globalPublisher fires only once, and the function in FlatMap 1 is likewise called only once and returns only one value (checked in the debugger). But the function in FlatMap 2 is suddenly called twice and returns two values, and the function in FlatMap 3 is then called six times.
The same thing happens on the third and subsequent emissions: globalPublisher and the function in FlatMap 1 fire once, and function1() returns only one value, but the rest are triggered several times, and the number of calls keeps growing.
Could someone tell me the reason for this strange behavior of the flatMaps? I have already gone through my code several times and debugged it; by all appearances it should work. I suppose it's possible that the global publisher is somehow retaining the "subscriptions" of the flatMaps, but I don't think it works that way. Do you have any ideas?
I suspect the problem lies in the combination of a global publisher and all the flatMaps.
Thanks in advance.
I see two options here.
Either:
flatMap has a maxPublishers parameter, which defaults to .unlimited and controls how many upstream publishers the operator will accept.
That's why your flatMaps keep publishing these values without limit. You can change the parameter, for example flatMap(maxPublishers: .max(1)).
Or:
You can use switchToLatest, which always uses the most recently provided publisher.

What's the best way to display data when user input changes in Meteor (datepicker)?

I don't know how else to word this, but basically I'm implementing a datepicker so the user can choose the range for which the data is displayed. The user picks a start date and an end date, and with that I re-run a gigantic function located in the lib folder to recompute all the data that is displayed via Meteor helpers on the main page.
The dates the user picks are stored in Session variables, which are accessed in the function in question. The function runs, but no changes are displayed on the client (the changes are real in the console, and I can see them being made via the console.log statements I have throughout the function).
This is what the datepicker's onRendered function looks like:
Template.dashboard.onRendered(function(){
    // Date picker
    $(function() {
        function cb(start, end) {
            $('#reportrange span').html(start.format('MMMM D, YYYY') + ' - ' + end.format('MMMM D, YYYY'));
            var startDate = start.format('MMMM D, YYYY');
            Session.set("startingDate", startDate);
            var endDate = end.format('MMMM D, YYYY');
            Session.set("endingDate", endDate);
        }

        var firstDate = dates[0];
        var lastItem = dates.length - 1;
        var lastDate = dates[lastItem];

        cb(moment(firstDate), moment(lastDate));

        $('#reportrange').daterangepicker({
            ranges: {
                'Last 7 Days': [moment().subtract(6, 'days'), moment()],
                'Last 30 Days': [moment().subtract(29, 'days'), moment()],
                'This Month': [moment().startOf('month'), moment().endOf('month')]
            }
        }, cb);
    });
});
The Tracker.autorun:
Tracker.autorun(function(){
    libFxn();
});
libFxn() is the rather large function in the lib folder that I call in the Tracker. So whenever one of the Session variables changes due to user input, Tracker.autorun fires, the function runs, and values are changed, which I can see via the console. On the client, however, I don't see the changes.
That leaves me in a dilemma: I need to show the user the resulting data changes based on the input, but:
1) Changes are not seen on the client, even though the function in the lib folder is executed.
2) I can't use document.location.reload(true) or refresh the page in any way, because when the page refreshes, the Session variables are reset to their defaults (the first and last dates of the dates array I have on hand).
So I need a way to send the user's date input to the function in the lib folder that shows the changes in the client/template and doesn't rely on Sessions if the page has to be refreshed.
If anyone can give me hints or tips, I would be grateful.
Here is an example of one helper, which is basically identical to all others minus the different variables it calls (all these variables are in the libFxn() function and are populated there and called via these helper functions):
WinRate: function(){
    if (Number(wins / gamesPlayed)) {
        return numeral(wins / gamesPlayed).format('0%');
    } else if (Number(wins / gamesPlayed) === 0) {
        return "0%";
    } else {
        return "n/a";
    }
}
From the comments above: you are not making the variables themselves reactive. You can do this using Tracker.Dependency.
In your lib file you will want something like globalsDep = new Tracker.Dependency;. You will probably want one for each independently modified outcome of your function, i.e. if you can modify 10 variables independently then you will want 10 dependencies, one for each; otherwise you will re-run every helper that depends on them whenever any value changes. If you want everything to re-run, of course, just use one:
globalsDep = new Tracker.Dependency;
Each place you modify the relevant variable (or at the end of your function, if you only want one dependency), you need to tell the dependency that it has become invalid and needs to recompute:
globalsDep.changed();
Then, in each of the helpers you want to re-run, call the depend function:
globalsDep.depend();
You should see them re-running straight away in the view. A simple example follows:
/****************
** In Lib File **
****************/

globalsDep = new Tracker.Dependency;
xDep = new Tracker.Dependency;

x = 15;
y = 10;
t = 0;

myBigLongFunction = function(){
    x = x + 5;
    y = y + 1;
    t = x + y;
    console.log('Changing Values', x, y, t);
    globalsDep.changed();
    if (x > 20)
        xDep.changed();
}

/****************
**  In JS File **
****************/

Template.main.helpers({
    testGlobalReactive: function(){
        globalsDep.depend();
        console.log('All vars rerun');
        return {t: t, x: x, y: y};
    },
    testXReactive: function(){
        xDep.depend();
        console.log('X rerun');
        return x;
    }
});

/*****************
** In HTML File **
*****************/

<template name="main">
    <div style="min-height:200px;width:100%;background-color:lightgrey;">
        {{#with testGlobalReactive}}
            X: {{x}}<br><br>
            Y: {{y}}<br><br>
            T: {{t}}<br><br>
        {{/with}}
        X Individual: {{testXReactive}}
    </div>
</template>
That said, I would caution against holding client state this way. You would be better off leveraging the reactivity of collections and keeping everything synced with the server through them: data stored in client-side globals is not persistent anywhere and cannot be trusted, since the client can modify global variables at will. If you are already setting this data from collections inside the function, ignore that last point; but you may still want to consider accessing the data either in an Iron Router data field or directly from the collection at the template level, since it will then be reactive by default without needing Tracker.Dependency. :D

Meteor: How to block a method call before the first one is finished?

I have the following scenario:
The client side has a button; clicking it executes a Meteor.call method on the server side, which calls an API and fetches products. During this time I want to disable the button and block the method from executing again: as it stands, nothing stops you from clicking the button 100 times while the server keeps executing the same method again and again.
A few ideas I had in mind: use Sessions to disable the button (problem: you can still call Meteor.call from the console and abuse it).
I also looked at Meteor.apply in the docs with wait: true, but it didn't seem to stop the method from executing. I'm honestly not sure how this kind of thing is handled without hacks.
Client-side:
'click .button-products': function(e){
    Meteor.call('getActiveProducts', function(error, results){
        if (error)
            return Alerts.add(error.reason, 'danger', {autoHide: 5000});
        if (results.success)
            return Alerts.add('Finished Importing Products Successfully', 'success', {autoHide: 5000});
    });
}
Server-side
Meteor.methods({
    getActiveProducts: function(){
        var user = Meteor.user();
        var api = api.forUser(user);

        importProducts = function(items){
            nextPage = items.pagination.next_page;
            items.results.forEach(function(product){
                var sameproduct = apiProducts.findOne({listing_id: product.listing_id});
                if (sameproduct) {
                    return;
                }
                var productExtend = _.extend(product, {userId: Meteor.userId()});
                apiProducts.insert(productExtend);
            });
        };

        var products = api.ProductsActive('GET', {includes: 'Images', limit: 1});
        importProducts(products);

        while (nextPage !== null) {
            products = api.ProductsActive('GET', {includes: 'Images', page: nextPage, limit: 1});
            importProducts(products);
        }

        return {success: true};
    }
});
From the Meteor docs:
On the server, methods from a given client run one at a time. The N+1th invocation from a client won't start until the Nth invocation returns. However, you can change this by calling this.unblock. This will allow the N+1th invocation to start running in a new fiber.
What this means is that subsequent calls to the method won't actually know that they were made while the first call was still running, because the first call will have already finished running. But you could do something like this:
Meteor.methods({
    getActiveProducts: function() {
        var currentUser = Meteor.users.findOne(this.userId);
        if (currentUser && !currentUser.gettingProducts) {
            Meteor.users.update(this.userId, {$set: {gettingProducts: true}});
            // let the other calls run, but now they won't get past the if block
            this.unblock();
            // do your actual method stuff here
            Meteor.users.update(this.userId, {$set: {gettingProducts: false}});
        }
    }
});
Now subsequent calls may run while the first is still running, but they won't run anything inside the if block. Theoretically, if the user sends enough calls, the first could finish before all of the others have started, but this should at least significantly limit the number of Etsy calls a user can initiate. You could adapt this technique to be more robust, such as storing the time of the last successful call and requiring X seconds to have passed, or counting the number of calls in the last hour and capping that number, etc.
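Stripped of the Meteor specifics, the guard idea can be sketched in plain JS. This is a hedged illustration, not the Meteor API: the names importing, runs, and getActiveProducts are made up, and a setTimeout stands in for the slow import:

```javascript
// Hypothetical re-entrancy guard: a flag blocks a second call while
// the slow operation is still in flight.
var importing = false;
var runs = 0;

function getActiveProducts() {
    if (importing) {
        // a concurrent call doesn't get past the guard
        return Promise.reject(new Error('import already in progress'));
    }
    importing = true;
    return new Promise(function (resolve) {
        setTimeout(function () { // stand-in for the slow API import
            runs += 1;           // the expensive work happens only once
            importing = false;
            resolve({success: true});
        }, 20);
    });
}

// Simulate double-clicking the button: the second call is refused.
var first = getActiveProducts();
var second = getActiveProducts().catch(function (e) { return e.message; });
```

In Meteor the flag lives in the user document instead of a module variable, so it also guards calls arriving on other fibers, but the control flow is the same.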
A package I wrote a while back might come in handy for you. Essentially it exposes the Session API on the server side (hence the name), meaning you can do something like ServerSession.set('doingSomethingImportant', true) within the call and then check this session's value in subsequent calls. The session can only be set on the server, and it expires when the connection closes (so they could still spam calls, but only as fast as they can refresh the page).
In the event of an error, you can just reset the session. There shouldn't be any issues from unexpected errors either, because the session will simply expire when the connection closes. Let me know what you think :)

Loading data asynchronously in ember-data

I'm writing an application based on ember-data, it loads up all of its data asynchronously. However, the didLoad function does not get called until find is used. For example:
App = Ember.Application.create();
App.Store = DS.Store.create({revision: 3});

App.Thing = DS.Model.extend({
    didLoad: function(){
        alert("I loaded " + this.get('id'));
    }
});

App.Store.load(App.Thing, {id: "foo"});
...will not trigger the alert, and findAll will not return the model. However, when I run:
App.Store.find(App.Thing,"foo");
The didLoad function will trigger, and it can be found with App.Store.findAll(App.Thing).
What's going on?
The ember-data source code explains it well:
// A record enters this state when the store asks
// the adapter for its data. It remains in this state
// until the adapter provides the requested data.
//
// Usually, this process is asynchronous, using an
// XHR to retrieve the data.
loading: DS.State.create({
    // TRANSITIONS
    exit: function(manager) {
        var record = get(manager, 'record');
        record.fire('didLoad');
    },

    // EVENTS
    didChangeData: function(manager, data) {
        didChangeData(manager);
        manager.send('loadedData');
    },

    loadedData: function(manager) {
        manager.goToState('loaded');
    }
}),
This means that didLoad is only triggered when the record was loaded via the adapter.
The find method asks the adapter for the data: it looks it up in the pool of currently available data hashes and, in your case, finds it, because you already provided it. In other cases, however, the data may not exist locally in the browser but remain on the server, which would trigger an AJAX request in the adapter to fetch it.
So didLoad currently only works in combination with an adapter (e.g. find).
But I totally agree with you that this should be changed, since triggering didLoad on models loaded via Store.load seems pretty obvious ;-)
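The distinction can be modeled with a toy store (this is an illustration of the explanation above, not the real ember-data API; Store, load, and find here are made-up names): load only caches a raw data hash, while find materializes a record through the loading path and fires didLoad.

```javascript
// Toy model of load vs. find.
function Store() {
    this.hashes = {};  // raw data hashes registered via load()
    this.records = {}; // materialized records
    this.loaded = [];  // ids for which didLoad has fired
}

Store.prototype.load = function (id, data) {
    this.hashes[id] = data; // no record is created, so didLoad never fires
};

Store.prototype.find = function (id) {
    if (!(id in this.records)) {
        // materialize from the locally cached hash (an XHR in a real adapter)
        this.records[id] = this.hashes[id];
        this.loaded.push(id); // exiting the "loading" state fires didLoad
    }
    return this.records[id];
};

var store = new Store();
store.load('foo', {id: 'foo'}); // nothing fired yet: store.loaded is []
store.find('foo');              // didLoad fires once: store.loaded is ['foo']
store.find('foo');              // already materialized: no second didLoad
```

Repeated find calls return the cached record without re-firing the hook, matching the quoted state-machine behavior where exit from loading runs only once.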
