User appearing in two mutually exclusive experiments in Firebase A/B Testing

In Firebase for iOS, I'm running two tests (#2 and #3) with the same target action, 'do N'.
I have created an Audience "Fans" to use as a condition.
Fans = users who have done N at least once in the previous app version.
Test#2 targets 100% users in the audience "Fans".
Test#3 targets 100% users NOT in the audience "Fans".
From the docs we know that users are attached to audiences permanently:
Specifically, users become permanent members of an audience after they are assigned to it.
So I would expect that the users in Test#2 and Test#3 do not intersect. However, BigQuery shows that sometimes a "do N" event carries both the "firebase_exp_2" and "firebase_exp_3" user properties at the same time – https://www.dropbox.com/s/2yyqcelbf8dryvc/Screenshot%202018-06-09%2017.49.33.png
How can this be possible?
Moreover, the Remote Config options are not the same for these experiments. How do I know which variant a user actually has?
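The overlap check can be sketched in Python over export-shaped rows (the sample data here is made up; the user_properties layout follows the standard Firebase BigQuery export, where each event carries a list of key/value user properties):

```python
# Find users whose events carry both experiment user properties at once.
events = [
    {"user_id": "A", "event_name": "do_N",
     "user_properties": [{"key": "firebase_exp_2", "value": "1"},
                         {"key": "firebase_exp_3", "value": "0"}]},
    {"user_id": "B", "event_name": "do_N",
     "user_properties": [{"key": "firebase_exp_3", "value": "2"}]},
]

def in_both_experiments(event):
    keys = {p["key"] for p in event["user_properties"]}
    return {"firebase_exp_2", "firebase_exp_3"} <= keys

overlapping = sorted({e["user_id"] for e in events if in_both_experiments(e)})
print(overlapping)  # ['A'] – user A is marked as being in both experiments
```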
Thanks in advance

As far as I know, you should not use user audiences in A/B tests, because it takes a long time for a user to become part of an audience. Also, in your case a user can start in no audience (and be put into Test#3) and then become a member of your "Fans" audience (and so also be put into Test#2). You should use user properties for A/B tests instead: they work faster, and you can also configure a property like "times_user_have_done_N" and target users who have done it more or fewer than x times.
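The race described here can be sketched as a timeline (hypothetical helper names; this mirrors the explanation above, not the actual SDK internals – the point is that experiment membership is sticky while audience membership only grows over time):

```python
# Sketch: why one user can end up in both "mutually exclusive" experiments.
user = {"is_fan": False, "experiments": set()}

def assign(user):
    # Test#2 targets users in "Fans"; Test#3 targets users NOT in "Fans".
    # Membership is evaluated against the audience state at assignment time,
    # and once assigned, the experiment property sticks.
    if user["is_fan"]:
        user["experiments"].add("firebase_exp_2")
    else:
        user["experiments"].add("firebase_exp_3")

assign(user)              # day 1: audience hasn't caught up yet -> joins Test#3
user["is_fan"] = True     # later: user's "do N" finally lands them in "Fans"
assign(user)              # re-evaluated on a later fetch -> also joins Test#2

print(user["experiments"])  # both experiment properties are now present
```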

Related

How can I easily analyze Firebase A/B test results with event parameters?

We use the Firebase A/B test product for our mobile apps. We need to drill into the parameters of our events and do deeper analysis. We have used BigQuery for this before, but it requires a lot of effort.
Let me tell you briefly about our problem:
Let's say we have an event called add_to_cart. We want to look at the number of times add_to_cart is triggered from a specific screen in the A/B test results – for example, events whose firebase_screen_class is category_page. This data can be accessed by writing a query over BigQuery, but that creates extra effort for every new need.
Is there a short way or tool about doing analysis by event parameters?
Since we found Firebase's reporting and analysis insufficient, we decided to use a different tool. If anyone encounters this problem, deep analysis is possible through BigQuery.
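The kind of BigQuery filtering described above can be sketched in Python over export-shaped rows (sample data is made up; event_params follows the standard export layout of per-event key/value pairs):

```python
# Count add_to_cart events fired from a specific screen, the same filter
# you would express in a BigQuery query over the Firebase export.
events = [
    {"event_name": "add_to_cart",
     "event_params": [{"key": "firebase_screen_class", "value": "category_page"}]},
    {"event_name": "add_to_cart",
     "event_params": [{"key": "firebase_screen_class", "value": "product_page"}]},
    {"event_name": "screen_view", "event_params": []},
]

def param(event, key):
    """Return the value of a named event parameter, or None if absent."""
    return next((p["value"] for p in event["event_params"] if p["key"] == key), None)

count = sum(1 for e in events
            if e["event_name"] == "add_to_cart"
            and param(e, "firebase_screen_class") == "category_page")
print(count)  # 1 – only the first event matches both conditions
```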
Alternatively, you can use Audiences as a hacky workaround:
1. Go to Custom Definitions section and create a custom definition.
Your scope should be "User". Select firebase_exp_<N> as the user property, because Firebase defines such a property for each user it adds to an experiment. You can find the <N> number in the link on your A/B test page.
E.g. if your A/B test link looks like https://console.firebase.google.com/u/0/project/your-project/config/experiment/results/20, the <N> number is 20 and the user property is firebase_exp_20.
2. Create an Audience for each variant
Create a new audience based on this custom dimension's value. A value of 0 corresponds to the Baseline; each variant after that continues with consecutive numbers (1, 2, 3, ...).
3. Go to Analytics
Go to Analytics and run your analysis for each variant using these Audiences.
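The grouping in step 2 can be sketched like this (sample users and property values are made up; the numbering follows the convention above, with 0 = Baseline):

```python
from collections import Counter

# Map each user's firebase_exp_20 value to an experiment group name,
# mirroring the audience definitions from step 2.
def group_name(variant_value):
    return "Baseline" if variant_value == "0" else f"Variant {variant_value}"

users = [
    {"id": "u1", "firebase_exp_20": "0"},
    {"id": "u2", "firebase_exp_20": "1"},
    {"id": "u3", "firebase_exp_20": "0"},
]

sizes = Counter(group_name(u["firebase_exp_20"]) for u in users)
print(sizes)  # Counter({'Baseline': 2, 'Variant 1': 1})
```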
I hope it helps.

How to Firebase A/B test new-user onboarding?

As some other questions have pointed out, if you're setting up a Remote Config based A/B test, there's no activation event based on the user's first open.
We want to A/B test our new onboarding flow against the previous onboarding experience, but without a startup trigger we're not sure how to properly create this experiment.
One SO answer talks about sending a custom activation event with a timestamp and then filtering the test participants by that timestamp, e.g. custom_first_open > 1234567...; however, the onboarding flow is the first thing the user sees.
From my understanding, as soon as the user initializes their Remote Config they will be enrolled in any active experiments. We would have to send the custom event before initialization, and it would have to be immediately available to the A/B test. A/B test data and Firebase events both seem to be very slow to register (hours to days), so I doubt this trick would correctly enroll the user in the onboarding test.
Is there another way to use AB testing to test onboarding efficacy only against new users?
There are a couple of ways to go about this.
First, when you create the experiment you can limit the experiment's targeting to only include users on a new version or build of the app (or in a given country, etc.).
[screenshot: example targeting]
You can also only target users in an Audience you define, which give you pretty flexible abilities to define whatever group you'd like to roll the tests out to.
[screenshot: creating an audience]
Note: we tend to recommend using first_touch_timestamp to correctly identify new users; it's more reliable than first_open.
Also, outcomes are easier to measure when you're looking at ARPU/LTV.
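The new-user check recommended in the note can be sketched as follows (the start value and helper name are assumptions for illustration; the export's first_touch_timestamp is in microseconds):

```python
# Treat a user as "new" if their first touch falls inside the
# experiment window.
EXPERIMENT_START_US = 1_700_000_000 * 1_000_000  # hypothetical start, microseconds

def is_new_user(first_touch_timestamp_us):
    return first_touch_timestamp_us >= EXPERIMENT_START_US

print(is_new_user(1_700_000_500 * 1_000_000))  # True: first touch after the start
print(is_new_user(1_600_000_000 * 1_000_000))  # False: existing user
```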

How to create a Firebase Audience to ask for an App Store Review/Rating

I want to create a Firebase Audience to ask to rate/review my app.
The condition I would like to have for a user to fit into the above audience is: a user who has opened the app at least 10 times, over the course of 3 distinct days.
Is it possible to create an audience with this condition?
I am open to suggestions to change/improve the condition. Or even a completely different condition that will achieve the same goal.
You can create a custom audience, choose an event as the condition, and pick session_start.
Then you can choose additional options like the number of sessions and the period.
This does not guarantee 3 distinct days, though. But by default Firebase only counts a new session after 30 minutes, so most of those users will have had their sessions over 3 days anyway. Firebase also gives you a preview of the audience size, so you can easily check how many users would be in that audience with a period of, e.g., one day.
In general I would recommend asking for a rating from users who have had a positive experience within your app (completing a certain action, etc.) and using that event for the audience.
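The condition from the question can be sketched over session_start timestamps (sample data made up; the real counting happens inside Firebase's audience builder, this just makes the rule concrete):

```python
from datetime import datetime

# Eligible for the rating prompt: at least 10 session_start events
# spread over at least 3 distinct days.
def eligible(session_starts, min_sessions=10, min_days=3):
    days = {ts.date() for ts in session_starts}
    return len(session_starts) >= min_sessions and len(days) >= min_days

# 4 sessions per day on 3 different days -> 12 sessions, 3 distinct days.
sessions = [datetime(2018, 6, d, h) for d in (1, 2, 3) for h in (9, 12, 15, 18)]
print(eligible(sessions))       # True
print(eligible(sessions[:4]))   # False: 4 sessions on a single day
```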

When do users become exposed to a Firebase A/B test when no activation event is set?

I'm running an A/B test targeting 100% of iOS users on specific versions, using a regex to match versions 2.1.27 and up; here's the regex in case it's relevant:
2\.1\.([3-9].|2[7-9])
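The pattern can be sanity-checked quickly (a sketch; full-string matching is assumed here, and Firebase's matcher may anchor differently):

```python
import re

# Versions intended to match: 2.1.27 and up.
pattern = re.compile(r"2\.1\.([3-9].|2[7-9])")

for version in ["2.1.26", "2.1.27", "2.1.30", "2.1.99"]:
    print(version, bool(pattern.fullmatch(version)))
# 2.1.26 False, 2.1.27 True, 2.1.30 True, 2.1.99 True
```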
I did not set any activation event for the experiment; meaning I left that field blank.
Now my question is: Who becomes part of the experiment? Anyone who opens the app with a matching version? Anyone who starts a session with a matching version? Anyone who engages with a matching session?
So far, it says the total number of users exposed to the experiment is 9.6K. I'm trying to set up a funnel that only includes these users, but I can't figure out one that shows me anything close to that number within the experiment's date range.
Firebase Support says:
Firebase will determine whether a user is part of the experiment when your app calls the activateFetched method.
That's the answer I was looking for.

Is it possible for an Alfresco workflow to be used for "anonymous planning poker"?

We're investigating Alfresco for doing wideband delphi ("planning poker") based on submitted statements of work (collected user stories). I've been reading through the Alfresco documentation, and there are two questions I haven't been able to get clear answers to:
Can we set it up so users can write to a folder or node, but not read from it? (To support "anonymous" planning, where users don't see the estimates other users have submitted.)
Can workflow tasks be implemented that ask users to comment on or submit items to a node or directory under the above model, rather than just a simple approve or deny?
Workflow:
User submits a statement of work
All users (or selected users at random, or ... ) in group get notice to review
Reviews include estimates on the overall SOW or specific phases
Reviews are anonymous/secret to all but the manager
Have you implemented something similar in Alfresco with fine-grained access control? Sharing your experience would be very helpful... I'm not looking for someone to do the work for me, just to confirm it can be done.
I would use some kind of parallel workflow for this.
First, the manager starts the workflow; the task type of this first node holds additional info about the user story and such. Then the manager selects the people or a group to whom this user story will be sent.
Here is where the parallel part comes into play: because it's parallel, no one sees the results of the other members of the workflow. The members fill in the requested fields (another custom task type with data like a score/estimate and maybe an explanation).
Before the workflow goes back to the manager, automatic calculations are made in a non-user task/node where you compute the overall score for the story. You can include each individual user and their score in the result/report if necessary.
Now the results are sent to the manager.
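The calculation node can be sketched like this (names are hypothetical; the median is one reasonable aggregate for planning poker, where wide spreads often trigger a re-vote instead):

```python
from statistics import median

# Aggregate the anonymous estimates collected by the parallel tasks.
def summarize(estimates):
    # estimates: {reviewer: score}. Only the summary is shown to members;
    # the per-reviewer breakdown goes to the manager's report.
    scores = list(estimates.values())
    return {"median": median(scores),
            "min": min(scores),
            "max": max(scores),
            "votes": len(scores)}

result = summarize({"alice": 5, "bob": 8, "carol": 5})
print(result)  # {'median': 5, 'min': 5, 'max': 8, 'votes': 3}
```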
