What is the scope of the VAR dataset (Aviarc)?

If Workflow1 shows Screen1, and Screen1 then calls Workflow2, can I use the var dataset from Screen1 in Workflow2, or does Workflow2 re-initialise the var dataset for its own use?
Is there any documentation on the var dataset?
I’m just re-doing my code for the screen validation, but it’s not working; I suspect the var dataset has been reset and the values from Screen1 are no longer available.

Can't find it in the documentation at the moment, but the behavior you describe is expected.
Each workflow that starts with a <workflow> element creates a new scope (see here) and a new VAR dataset. This dataset is only visible to the screen(s) relative to it. If a screen in that scope calls another workflow, the new workflow creates its own VAR dataset and shadows any existing one.
In your example the setup is as follows:
--> Entry to the Workflow1
Datasets:
var
... (any other declared dataset)
--> Show screen Screen1
Visible Datasets:
var
... (any other declared dataset)
--> Call Workflow2
Datasets:
var (this is a new clean dataset which does not have any
relationship to the VAR dataset created in Workflow1)
... (any new datasets)
... (any datasets declared in Workflow1, given that there were
no new datasets declared in Workflow2 with the same name)
So your reasoning seems to be correct, and you'll need to create some other dataset if you want data to be available across workflows.

Related

Is there any way to get the C# object/data on which an NUnit test is failing?

I am writing unit tests for a complex application that has many rules to be checked in a single flow, using NUnit and Playwright in .NET 5. To save time writing the test scripts for Playwright (a front-end testing tool), we used a library named Bogus to create dummy data dynamically based on the rules (the test cases have numerous rules to check, and it was much more difficult to write fresh data for every case). I am using the Playwright script inside the NUnit test and providing the data source with [TestCaseSource("MethodName")] to supply a dynamic data object for the different cases.
Now we are facing a problem: some of the test cases pass and some fail, and we cannot identify which test case is causing the problem, because the test case data comes from the dynamic source, and in that source the data is generated by the Bogus library on the basis of the rules we defined. Also, we cannot watch the tests for long stretches, which is why we automated the process in the first place.
[Test]
[TestCaseSource("GetDataToSubmit")]
public async Task Test_SubmitAssignmentDynamicFlow(Assignment assignment)
{
    using var playwright = await Playwright.CreateAsync();
    await using var browser = await playwright.Chromium.LaunchAsync(new BrowserTypeLaunchOptions
    {
        Headless = false,
        ...
    });
    ....
}

private static IEnumerable<TestCaseData> GetDataToSubmit()
{
    // creating data for a simple job
    var simpleAssignment = new DummyAssigmentGenerator()
        ....
        .Generate();
    yield return new TestCaseData(simpleAssignment);
    ....
}
Now, my question is: is there any way to see the actual values of the object for a failed case when we look at the full report of the test cases, so that we can tell which particular values are causing problems and eventually fix them?
Two approaches...
Assuming that DummyAssignmentGenerator is your own class, override its ToString() method to display whatever you would like to see. That string will become part of the name of the test case generated, like...
Test_SubmitAssignmentDynamicFlow(YOUR_STRING)
Apply a name to each TestCaseData item you yield using the SetName() fluent method. In that case, you are supplying the full display name of the test case, not just the part in parentheses. Use {m}(YOUR_STRING) in order to have it appear the same as in the first approach.
If you can use it, the first approach is clearly the simpler of the two.

Getting numChildren for large data sets

I have a node that can potentially have tens of thousands of children, and I need to be able to fetch just the number of children it has without downloading all of its data.
From what I understand, using the on('value') function and DataSnapshot.numChildren() will cause the whole data of that node to be downloaded before it is counted.
Using on('value') would indeed download the entire node. But you can use the shallow feature of their REST API to download only the keys, which you can then count.
curl 'https://samplechat.firebaseio-demo.com/.json?shallow=true&auth=CREDENTIAL'
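If you would rather do the same from code, here is a minimal JavaScript sketch of the same idea (it assumes a runtime with a global fetch, such as a modern browser or Node 18+, and reuses the URL and CREDENTIAL placeholder from the curl example):

// Request only the keys of the node via the shallow REST API, then count them.
var url = 'https://samplechat.firebaseio-demo.com/.json?shallow=true&auth=CREDENTIAL';
fetch(url)
  .then(function (res) { return res.json(); })
  .then(function (keys) {
    // The shallow response maps each child key to true, so counting the keys
    // gives the number of children without downloading their contents.
    var count = keys ? Object.keys(keys).length : 0;
    console.log('child count:', count);
  });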
I do not think that Firebase currently has an operation to do that. You could always keep a property under the parent object that reflects the number of children and update it atomically upon the addition of a new child.
For example (in JavaScript):
// Atomically increment a children_count property each time a child is added.
var ref = new Firebase(baseUrl + '/' + parentId + '/children_count');
ref.transaction(function (currentVal) {
  return (currentVal || 0) + 1;
});
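Reading the count back is then a single, tiny read instead of a download of all the children. A minimal sketch using the same legacy Firebase JavaScript API as above:

// Read just the counter value maintained by the transaction above.
var countRef = new Firebase(baseUrl + '/' + parentId + '/children_count');
countRef.once('value', function (snapshot) {
  console.log('child count:', snapshot.val() || 0);
});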

Limiting Children of Object in Firebase

I am looking to get back my whole object, but limit one of my children objects.
For example, say you build a chat app the way the Firebase examples do, and you have "rooms".
So you might have
rooms: {
    mainroom: {
        name: something,
        otherAttrs: mfasfd,
        messages: {
            0: {
                message: something
            },
            1: {
                message: something else
            }
        }
    }
}
I may have 300 messages in that mainroom, but I want to limit it to, say, 30. This example is basic, but in my actual application my objects are very related, so I don't want to denormalize any further.
I could do a mainroom call and then another child call off of that, but I am wondering if I would get dinged twice: the initial call would load all the messages anyway, and then I would load 30 of them with the child call. I was just hoping someone would have a better recommendation.
Start by reading up about denormalization. This is a concept which is enforced in SQL by table structures, but also important in NoSQL, although you're given enough rope to tangle yourself up and have a bad day.
So the first step is to split messages into its own path:
URL/rooms
URL/messages
Now you can grab your meta data and messages separately, and call limit to set the number loaded:
var fbRef = new Firebase(URL);
var roomRef = fbRef.child('rooms/'+roomId);
var chatRef = fbRef.child('messages/'+roomId).limit(30);
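To make the "dinged twice" concern concrete: with the split structure you fetch the room meta data once and receive only the messages allowed by the limit(30) query, roughly like the sketch below (same legacy API as above; the rendering helpers are hypothetical placeholders):

// Load the room meta data once...
roomRef.once('value', function (snap) {
  renderRoomHeader(snap.val()); // hypothetical rendering helper
});

// ...and stream only the limited set of messages, one child at a time.
chatRef.on('child_added', function (snap) {
  renderMessage(snap.val()); // hypothetical rendering helper
});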
In case you're not convinced that these should be split up, you're going to run into this same issue when you want to create a dropdown containing a list of room names (you have to load all your messages in the current data structure, just to get the room names).
For great justice, split meta data and detailed records into their own paths. Otherwise, all your base are belong to bandwidth.

Databinding with a large number of values and getter methods?

Reading through Misko's excellent answer on databinding here: How does data binding work in AngularJS?, I am wondering how Angular does its dirty checking behind the scenes, because:
I'm creating an app that prints a large number of Car objects to the DOM, each Car looking something like this:
var Car = function(settings) {
    this.name = settings.name;
    // + many more properties...
};

Car.prototype = {
    calcPrice: function() { ... },
    // + many more methods...
};

$scope.cars = [lots of Cars];
The linked answer above mentions a limit of around 2000 values that can be databound and printed in the DOM, and given the large number of properties on each Car object, this number could very easily be exceeded in this app when looping through the cars array.
Say you end up having 2000+ values printed in the DOM through databinding, and one of these values updates: does the presence of 2000 values hurt Angular's dirty-checking performance, or does Angular somehow flag the values that change, so it only looks at the changed values when running its $digest()? In other words, does it matter that you have a lot of databound values when only a very small number of them are likely to be updated after the initial print?
If it does matter, and since most of the values are read-only, is there some way to use the databinding syntax {{car.prop}} to get the value into the DOM once and then tell Angular not to bind to it any more?
Would it make a difference to add getter methods to the Car object and provide its properties like this: {{car.getProp()}}?
I had the same kind of problem with an application I was working on. Having a huge data set is not the problem; the problem comes from the bindings, and ng-repeat in particular killed performance.
Part of the solution was replacing "dynamic" bindings with "static" bindings using this nice library: http://ngmodules.org/modules/abourget-angular.

Meteor.deps.Context and Invalid Collection of Documents

What's the difference between the following two blocks of code? The top one works as expected, but the bottom one does not.
// Initially outputs 0, but eventually outputs the # of players.
Meteor.autorun(function() {
    var players = Players.find();
    console.info(players.count());
});

// Outputs 0 twice. Why does this not work like the block above?
var players = Players.find();
Meteor.autorun(function() {
    console.info(players.count());
});
I'm testing this in the leaderboard example, within the Meteor.isClient block.
Thank you,
Andrew
While Meteor is reactive, you need to make your query within a reactive context, i.e. the Meteor.autorun. The reactive contexts are: Template, Meteor.autorun, Meteor.render and Meteor.renderList.
In the second case, var players = Players.find(); is run while Meteor is starting up, so it holds only the data that was available at that moment.
In the first case you've placed the query in a reactive context, which is re-run whenever there is a data update of some sort. In the second case the query never gets a chance to re-run, so it stays with the data it held when the browser had just loaded the page.
Even though Meteor is reactive, you still need to re-query the data within a reactive context.
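The same rule applies to the other reactive contexts listed above. For instance, a template helper body is a reactive context, so a query made there is re-run automatically. A minimal sketch in the old-style helper syntax used by the leaderboard example (the helper name here is just illustrative):

// Client side: the helper re-runs, and the template re-renders, whenever Players changes.
Template.leaderboard.playerCount = function () {
    return Players.find().count();
};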

Resources