In NoFlo I often come across components like this:
noflo = require 'noflo'

class Foo extends noflo.AsyncComponent
  constructor: ->
    @inPorts = new noflo.InPorts
      main:
        datatype: 'int'
        description: 'Main async input'
        required: true
      sup1:
        datatype: 'string'
        description: 'Supplementary input #1'
        required: true
      sup2:
        datatype: 'int'
        description: 'Supplementary input #2'
        required: true
    @outPorts = new noflo.OutPorts
      out:
        datatype: 'object'
        description: 'Result object'
      error:
        datatype: 'object'

    @sup1 = null
    @sup2 = null
    @inPorts.sup1.on 'data', (@sup1) =>
    @inPorts.sup2.on 'data', (@sup2) =>

    super 'main', 'out'

  doAsync: (main, callback) ->
    unless @sup1 and @sup2
      return callback new Error "Supplementary data missing"
    # Combine data received from different sources
    result =
      main: main
      sup1: @sup1
      sup2: @sup2
    # Reset state until next iteration
    @sup1 = null
    @sup2 = null
    # Send the result
    @outPorts.out.send result
    @outPorts.out.disconnect()
    callback()

exports.getComponent = -> new Foo
It assumes that all 3 input connections are somehow synchronized, despite the fact that the network consists mostly of async components. Consider this situation: Foo waits for main and receives the sup1 and sup2 packets, but then the next sup1 packet (which should be combined with the next main) arrives while the component is still waiting for the previous main. The result will be a complete mess at moderate or high data throughput.
Do NoFlo async components have any means of protection against data races, or is it all up to component designers?
There are 2 sides to the problem here: synchronizing inputs and maintaining internal state. Internal state is more or less protected by the fact that Node.js is not multithreaded, so nothing will touch the state variables until the previous doAsync() handler finishes. But syncing inputs is still an open question.
As it turns out, as of NoFlo v0.5.1 there are no built-in safeguards against data races, and component designers have to take care of this themselves.
For asynchronous components that means:
If data from multiple inports is required, make sure it has arrived on all the ports before processing. See the Components & Ports toolbox.
Reset the component's state from iteration to iteration to ensure no "memory" side effects take place.
Protect the component's internal state from data races by preceding it with a Throttle component and connecting the component's LOAD outport to Throttle's LOAD inport.
With recent versions of NoFlo, the recommended way is to use noflo.helpers.WirePattern and synchronize using groups.
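For illustration, here's a minimal sketch of that approach (assuming the NoFlo 0.5.x helpers API and reusing the port names from the question): with group: true, WirePattern fires the process function only once packets carrying the same group have arrived on all the listed inports, so a sup1 from the next iteration can no longer be paired with the previous main.

// A minimal sketch, assuming the NoFlo 0.5.x helpers API; not a
// drop-in replacement for the original component.
var noflo = require('noflo');

exports.getComponent = function () {
  var c = new noflo.Component();
  c.inPorts.add('main', { datatype: 'int' });
  c.inPorts.add('sup1', { datatype: 'string' });
  c.inPorts.add('sup2', { datatype: 'int' });
  c.outPorts.add('out', { datatype: 'object' });
  c.outPorts.add('error', { datatype: 'object' });

  return noflo.helpers.WirePattern(c, {
    in: ['main', 'sup1', 'sup2'], // wait until all three inports have data...
    out: 'out',
    async: true,
    group: true                   // ...for the same group, e.g. a request ID
  }, function (data, groups, out, callback) {
    // data.main, data.sup1 and data.sup2 are guaranteed to belong together
    out.send({ main: data.main, sup1: data.sup1, sup2: data.sup2 });
    callback();
  });
};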
I use Redux Toolkit, and in particular the new listener api, to perform tasks similar to what I could do with Redux-Saga.
Unfortunately, for a few days now I've been stuck with a memory leak and I can't find the cause.
I have reproduced the leak in a minimal example; link to the example: https://github.com/MrSquaare/rtk-memory-leak
To observe this memory leak:
I use Chromium's DevTools Memory tool
I trigger a garbage collection
I take a heap snapshot
I dispatch entity/load (via the UI button)
I take several heap snapshots every 2-3 seconds
Using the comparison tool, I notice that the array allocation size grows indefinitely
And after dispatching entity/unload and taking another heap snapshot, the allocations disappear...
Has anyone observed similar behavior? Or does anyone have an idea of the cause? Thanks!
EDIT 1:
I made an example with only the listener middleware (only-middleware branch) and compared different approaches:
With forkApi.pause: significant leaks, especially of the generated entities
Without forkApi.pause: I call api.dispatch directly; no more leaks of the generated entities, some leaks of other kinds, but those may be normal (I am not qualified enough to judge)
Without api.dispatch: I call the entity-generating function directly; same result as with api.dispatch
It seems the leak is related to forkApi.pause, but again I am not qualified enough to know the real cause...
It's probably the promises.forEach. Every 1000ms, you create a bunch of new promises and schedule things for them. You never wait for the last batch of those promises to finish, so they accumulate.
Replace the promises.forEach with an await Promise.all(promises.map(...)) and see what that does.
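Something like this hypothetical sketch, where promises and handleResult stand in for whatever your reproduction code builds each tick:

// Hypothetical sketch: promises and handleResult are placeholders for
// whatever the reproduction code does every 1000ms.
async function tick(promises, handleResult) {
  // Before: fire-and-forget; nothing ever awaits these promises, so each
  // tick piles a new batch on top of the old ones.
  // promises.forEach((promise) => promise.then(handleResult));

  // After: wait for the whole batch to settle before scheduling the next one.
  await Promise.all(promises.map((promise) => promise.then(handleResult)));
}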
After reading your solution more closely, I believe you can do this with fewer problems by sticking more to the reducer and less to the listenerMiddleware.
I would suggest these changes:
export const entitySlice = createSlice({
  name: "entity",
  initialState: entityAdapter.getInitialState({ acceptingEntities: false }),
  reducers: {
    upsertOne: (state, action) => {
      entityAdapter.upsertOne(state, action.payload);
    },
    removeAll: (state) => {
      entityAdapter.removeAll(state);
    },
    load(state) { state.acceptingEntities = true },
    unload(state) { state.acceptingEntities = false },
  },
  extraReducers: (builder) => {
    builder.addCase(getEntity.fulfilled, (state, action) => {
      if (!state.acceptingEntities) return;
      // The adapter state is { ids, entities }, so look the previous
      // entity up directly by the id carried in the payload
      const prevEntity = state.entities[action.payload.id];
      entityAdapter.upsertOne(
        state,
        prevEntity
          ? { ...action.payload, data: mergeEntityData(prevEntity.data, action.payload.data) }
          : action.payload
      );
    });
  },
});
and
entityMiddleware.startListening({
  actionCreator: api.actions.load,
  effect: async (action, api) => {
    const task = api.fork(async (forkApi) => {
      // Poll until the forked task is cancelled
      while (!forkApi.signal.aborted) {
        for (const id of entityIds) {
          api.dispatch(getEntity(id));
        }
        // forkApi.delay is tied to the task's lifetime, so cancellation
        // interrupts the wait immediately
        await forkApi.delay(1000);
      }
    });
    // Wait for the unload action, then stop the polling task
    await api.condition(api.actions.unload.match);
    task.cancel();
  },
});
Generally:
Logic like calculating a new value belongs in the reducer, not outside it. Doing this kind of thing outside always carries the risk of race conditions, while in the reducer you always have all the info available (also, no risk of hogging memory by holding stale value references)
Dispatching another action directly after a thunk only adds more workload: after every reducer run, every selector reruns and your UI might rerender. Just go for an extraReducer from the start
I just added a boolean acceptingEntities to indicate whether updates should currently take place
this massively reduces complexity in your listener
It may be related to the use of Promise.race(): https://github.com/nodejs/node/issues/17469. I've filed https://github.com/reduxjs/redux-toolkit/issues/3020 for us to look at this further.
I'm trying to use Spring Cloud Contract (SCC) to write some contracts before I rebuild the producer side (there are no existing tests!). The examples around lists and deeper data structures in the documentation are a bit thin, so I want to understand whether this is feasible, or whether I have to drop down to calling a command to make the right assertions.
I'm using the latest version, v2.11.
So -
Given:
An API that will return a list of objects in its payload. The length of the list will depend on the identity of the client, e.g. client A will get 3 items, client B will get 4 items. The identity of the client isn't of interest here.
When:
A client makes a GET request, passing a querystring parameter for item selection within the list of items
Then:
I want to write a contract that takes input from the request and proves that the response contains a list of objects, where the item matching the selector has a boolean field selected:true while the rest have selected:false. There's an edge case where the selector is wrong and no item is selected.
E.g. For the request GET /someapi?id=bbb
Response
{
  foo: 'xxxy',
  bar: 123,
  baz: [
    { id: 'aaa', selected: false, .... },
    { id: 'bbb', selected: true, .... },
    { id: 'ccc', selected: false, .... }
  ]
}
Of course the selected item can be anywhere in the list. So I had in mind an assertion like this pseudocode:
jsonPath('$.baz[?(@.id == fromRequest().query("id"))].selected', byEquality( true ) )
But I can't do that fromRequest() stuff in the JSONPath statement. Right now I guess I could simply have the whole response body as the spec, but that seems unwieldy. If it must be, that's fine I guess.
Any ideas or help appreciated.
I followed the Flow docs and typed my Redux action creators using a union (https://flow.org/en/docs/react/redux/#toc-typing-redux-actions),
so I have a file with ALL the actions gathered into one union, as in the example:
type Action =
| { type: "FOO", foo: number }
| { type: "BAR", bar: boolean }
| { type: "BAZ", baz: string };
The Action type is imported in my reducers and used as in the example from the docs:
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "FOO": return { ...state, value: action.foo };
    case "BAR": return { ...state, value: action.bar };
    default:
      (action: empty);
      return state;
  }
}
The problem:
As I mentioned, I gathered ALL the actions in one file; currently ~600 actions in one union. I noticed that lately the Flow server takes a crazy amount of time to start (100+ seconds), and rechecking is also a pain if a change is related to a reducer. According to the Flow logs, files that contain reducers are marked as "Slow MERGE" (15 to 45s).
After experimenting, I noticed that changing my Action type to any cuts the time from 100s to 9s.
The question:
Can this be related to the huge Action union?
Should I split it into a few smaller types, each containing only the actions needed by a particular reducer, or is this the wrong way to fix my issue?
It's probably more likely that this one action type is used across your entire app. Any time you make a change to it, Flow needs to recheck a very large number of files. One way to help mitigate this is to ensure all your union actions are in files of their own that don't import other files. Flow can get slow if it has "cycles": one type imports another type, which then imports the first type. This can happen if, for example, you define your reducer actions in the reducers themselves. This causes a cycle. Instead, move your action types to their own file.
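A sketch of that layout (the file names are just examples):

// actionTypes.js -- a "leaf" module: type definitions only, no imports,
// so it can never participate in an import cycle.
export type Action =
  | { type: "FOO", foo: number }
  | { type: "BAR", bar: boolean };

// fooReducer.js -- imports the type; the type file never imports back,
// so no cycle is formed:
// import type { Action } from "./actionTypes";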
Additionally, you can use flow cycle to output a dot file, which you can then visualize in something like Gephi (https://gephi.org/) to detect cycles.
We've been using SCORM in our previous e-learning 'engine' but we want to change the elements our Managed Learning Environment (MLE) tracks, namely each completable component in an e-learning module.
At runtime, we run the following code to set up our SCORM connection:
var vault = {}; // vault 'namespace' helps ensure no conflicts with possible other "SCORM" variables
vault.UTILS = {}; // For holding UTILS functions
vault.debug = { isActive: true }; // Enable (true) or disable (false) debug mode

vault.SCORM = { // Define the SCORM object
    version: null,                  // Store SCORM version
    handleCompletionStatus: true,   // Whether or not the wrapper should automatically handle the initial completion status
    handleExitMode: true,           // Whether or not the wrapper should automatically handle the exit mode
    API: { handle: null, isFound: false },              // Create API child object
    connection: { isActive: false },                    // Create connection child object
    data: { completionStatus: null, exitStatus: null }, // Create data child object
    debug: {}                       // Create debug child object
};

vault.SCORM.API.find('win');
vault.SCORM.connection.initialize();

if (vault.SCORM.data.get("cmi.core.lesson_status") == "not attempted") {
    vault.SCORM.data.set("cmi.core.lesson_status", "incomplete");
    vault.SCORM.data.save();
}
There are many more functions in the SCORM.js file, but the point is that this all works. When the module is loaded into our MLE, the following code triggers course completion:
vault.SCORM.data.set("cmi.core.lesson_status" , "completed");
So how would we register a completable component with SCORM? (Components in our 'engine' are jQuery objects usually called 'element'). Would something like the following work, or are custom calls in SCORM not possible?
vault.SCORM.data.set("cmi.interactions.n."+element.componentId() , "incomplete");
But then if I registered an interaction by specifying an id, as follows...
vault.SCORM.data.set("cmi.interactions.n.id", element.componentId());
...how do I then set or access 'completion' on that component?
I've been reading posts and pdf specs from various sites, but the explanations are sparse at best.
I know there aren't a lot of SCORM followers here, but if you have any info, I'd be keen to hear it.
FWIW, that's my pipwerks SCORM wrapper, but with the variable pipwerks changed to ncalt.
There is documentation on how to use my wrapper at http://pipwerks.com (search for "scorm wrapper" in the search field). The original source code can be found at https://github.com/pipwerks/scorm-api-wrapper.
Note your sample code is not using the wrapper the way it was intended to be used. For example, this:
ncalt.SCORM.data.set("cmi.core.lesson_status" , "completed");
should be this (data is an internal helper and not necessary):
ncalt.SCORM.set("cmi.core.lesson_status" , "completed");
You can shorten it even further via a reference variable, like so:
var scorm = ncalt.SCORM;
scorm.set("cmi.core.lesson_status" , "completed");
scorm.save();
scorm.get("cmi.core.lesson_status"); //returns "completed"
As for your 'components', if you'd like to use SCORM's cmi.interactions model, be sure you're using the correct syntax. The "n" in the SCORM documentation (cmi.interactions.n.id) is meant to represent a number; it's not a literal "n".
scorm.set("cmi.interactions.0.id", "myfirstinteraction");
scorm.save();
To retrieve data from that interaction, you need to specify the number in place of the n:
scorm.get("cmi.interactions.0.id"); //returns "myfirstinteraction"
Note the CMI data model doesn't provide a 'status' field for cmi.interactions. You'd need to use cmi.objectives.
scorm.set("cmi.objectives.0.status", "completed");
scorm.save();
scorm.get("cmi.objectives.0.status"); // returns "completed"
The CMI data model (as available in SCORM) is spelled out here: http://scorm.com/scorm-explained/technical-scorm/run-time/run-time-reference/
I'm running into an odd issue with a Backbone.js Model where an array member is being shown as blank. It looks something like this:
var Session = Backbone.Model.extend({
    defaults: {
        // ...
        widgets: []
    },

    addWidget: function (widget) {
        var widgets = this.get("widgets");
        widgets.push(widget);
        this.trigger("change:widgets", this, widgets);
    },

    // ...

    // I have a method on the model for grabbing a member of the array
    getWidget: function (id) {
        console.log(this.attributes);
        console.log(this.attributes.widgets);
        // ...
    }
});
I then add a widget via addWidget. When calling getWidget, the result I get (in Chrome) is this:
Object
  widgets: Array[1]
    0: child
    length: 1
    __proto__: Array[0]
  __proto__: Object
[]
It's showing that widgets is not empty when logging this.attributes but it's shown as empty when logging this.attributes.widgets. Does anyone know what would cause this?
EDIT
I've changed the model to instantiate the widgets array in the initialize method to avoid sharing references across multiple instances, and I started using backbone-nested, with no luck.
Be careful about trusting the console; there is often asynchronous behavior that can trip you up.
You're expecting console.log(x) to behave like this:
You call console.log(x).
x is dumped to the console.
Execution continues on with the statement immediately following your console.log(x) call.
But that's not what happens; the reality is more like this:
You call console.log(x).
The browser grabs a reference to x, and queues up the "real" console.log call for later.
Various other bits of JavaScript run (or not).
Later, the console.log call from (2) gets around to dumping the current state of x into the console, but this x won't necessarily match the x as it was at (2).
In your case, you're doing this:
console.log(this.attributes);
console.log(this.attributes.widgets);
So you have something like this at (2):
attributes.widgets
^ ^
| |
console.log -+ |
console.log -----------+
and then something happens in (3) which effectively does this.attributes.widgets = [...] (i.e. changes the attributes.widgets reference) and so, when (4) comes around, you have this:
attributes.widgets // the new one from (3)
^
|
console.log -+
console.log -----------> widgets // the original from (1)
This leaves you seeing two different versions of widgets: the new one which received something in (3) and the original which is empty.
When you do this:
console.log(_(this.attributes).clone());
console.log(_(this.attributes.widgets).clone());
you're grabbing copies of this.attributes and this.attributes.widgets that are attached to the console.log calls so (3) won't interfere with your references and you see sensible results in the console.
That's the answer to this:
It's showing that widgets is not empty when logging this.attributes but it's shown as empty when logging this.attributes.widgets. Does anyone know what would cause this?
As far as the underlying problem goes, you probably have a fetch call somewhere and you're not taking its asynchronous behavior into account. The solution is probably to bind to an "add" or "reset" event.
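For example (a hypothetical sketch; it uses the model's fetch success callback and the change:widgets event that addWidget already triggers):

// Hypothetical sketch: read widgets only after the data has actually
// arrived, not immediately after kicking off the request.
var session = new Session();

session.on("change:widgets", function (model, widgets) {
    console.log(widgets); // populated by now
});

session.fetch({
    success: function (model) {
        console.log(model.get("widgets")); // also safe here
    }
});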
Remember that [] in JS is just shorthand for new Array(), and since the defaults object is created once and its values are copied by reference, every instance of your Session model will share the same array object. This leads to all kinds of problems, including arrays appearing to be empty.
To make this work the way you want, you need to initialize your widgets array in the constructor. This will create a unique widget array for each Session object, and should alleviate your problem:
var Session = Backbone.Model.extend({
    defaults: {
        // ...
        widgets: false
    },

    initialize: function () {
        this.set('widgets', []);
    },

    addWidget: function (widget) {
        var widgets = this.get("widgets");
        widgets.push(widget);
        this.trigger("change:widgets", this, widgets);
    },

    // ...

    // I have a method on the model for grabbing a member of the array
    getWidget: function (id) {
        console.log(this.attributes);
        console.log(this.attributes.widgets);
        // ...
    }
});
Tested in a fiddle with Chrome and Firefox: http://jsfiddle.net/imsky/XBKYZ/
var s = new Session;
s.addWidget({ "name": "test" });
s.getWidget();
Console output:
Object
  widgets: Array[1]
  __proto__: Object
[
  Object
    name: "test"
    __proto__: Object
]