I made a new menu for my app and set up an A/B test to optimize my revenue, with ad_impression as the goal.
I can see in the A/B testing console that the new menu performs worse for ad_impression:

I have also tagged those two groups of users with User Properties, and I have noticed that my revenue is actually better with the new ("card") menu:


Does the A/B test only tell me which version is better at producing at least one ad_impression per user? How can I test for the cumulative number of event occurrences? (The A/B test doesn't take into account that with the new version more users will use the app in the long run, and so on.) If so, testing for ad_impression and other ad events is nearly pointless in my case. Do you have any plans to add an option to see the cumulative number of event occurrences, like in funnels, and to optimize for revenue?
We use the Firebase A/B Testing product for our mobile apps. We need to get at the parameters of our events and do a deeper analysis. We have used BigQuery for this before, but it requires a lot of effort.
Let me tell you briefly about our problem:
Let's say we have an event called add_to_cart. We want to see, in the A/B test results, how many times add_to_cart was triggered from a specific screen, for example where firebase_screen_class is category_page. This data can be accessed by writing a query over BigQuery, but that creates extra effort for every new need.
Is there a shorter way, or a tool, for doing analysis by event parameters?
Since we find Firebase's reporting and analysis insufficient, we have decided to use a different tool. If anyone runs into this problem, a deep analysis is possible through BigQuery.
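For reference, that kind of BigQuery analysis can be sketched as below. The SQL assumes the standard Firebase BigQuery export schema (daily events_* tables with an event_params array); the project and dataset names are placeholders. The Python function applies the same filter to rows already pulled out of the export, as a runnable illustration of the logic.

```python
# SQL sketch (placeholder project/dataset), following the standard
# Firebase BigQuery export schema: filter on event_name, then pull
# firebase_screen_class out of the event_params array.
QUERY = """
SELECT COUNT(*) AS add_to_cart_count
FROM `your-project.analytics_123456.events_*`
WHERE event_name = 'add_to_cart'
  AND (SELECT value.string_value
       FROM UNNEST(event_params)
       WHERE key = 'firebase_screen_class') = 'category_page'
"""

def count_events(rows, event_name, screen_class):
    """Apply the same filter locally. `rows` are dicts mirroring the
    export, e.g.:
    {"event_name": "add_to_cart",
     "event_params": [{"key": "firebase_screen_class",
                       "value": {"string_value": "category_page"}}]}
    """
    count = 0
    for row in rows:
        if row["event_name"] != event_name:
            continue
        # Flatten the key/value parameter array into a plain dict.
        params = {p["key"]: p["value"].get("string_value")
                  for p in row["event_params"]}
        if params.get("screen_class_key" if False else "firebase_screen_class") == screen_class:
            count += 1
    return count
```

The nested sub-select on UNNEST(event_params) is the usual pattern for reading one parameter out of a Firebase export row.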
As another, somewhat hacky way, you can use Audiences:
1. Go to Custom Definitions section and create a custom definition.
Set the scope to "User" and select firebase_exp_<N> as the User property, because Firebase defines such a property for each user it adds to the experiment. You can find the <N> number in the link on your A/B test page.
E.g. if your A/B test link looks like https://console.firebase.google.com/u/0/project/your-project/config/experiment/results/20, the <N> number is 20 and the user property is firebase_exp_20.
2. Create an Audience for each experiment group
Create a new audience based on the value of this custom dimension. A value of 0 corresponds to the Baseline; each subsequent group continues with consecutive numbers (1, 2, 3, ...).
3. Go to Analytics
Go to Analytics and run your analysis for each group using these Audiences.
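The property-name lookup in step 1 can be sketched mechanically; a minimal example (the URL is the one from the example above, and the helper name is my own):

```python
import re

def experiment_property(results_url):
    """Extract the experiment number from a Firebase A/B test results URL
    and build the matching firebase_exp_<N> user-property name."""
    m = re.search(r"/experiment/results/(\d+)", results_url)
    if m is None:
        raise ValueError("no experiment number found in URL")
    return f"firebase_exp_{m.group(1)}"

url = ("https://console.firebase.google.com/u/0/project/your-project"
       "/config/experiment/results/20")
print(experiment_property(url))  # firebase_exp_20
```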
I hope it helps.
In Firebase A/B tests, is it possible to monitor event counts instead of conversion rates?
For example, I would like to know if users in Variant A trigger a certain event more times than the Baseline (not only if they trigger the event or not, but how many times they trigger it).
Thanks!
Oriol
That's a really good question. For each event that is triggered, you should be able to derive an event count as well as an event DAU metric. Firebase has limited capability to run such analysis. One way to do this is to download the data and run a manual analysis; another is to use a tool like Statsig that computes these metrics automatically. Here's a screenshot of what you get for every A/B experiment; you can see how the tool breaks down Event Count and Event DAU for each metric.
Disclaimer: I work at Statsig.
As some other questions have pointed out, if you're setting up a Remote Config based A/B test, there's no activation event based on the user's first open.
We want to A/B test our new onboarding flow against the previous onboarding experience; however, without a startup trigger we're not sure how to properly create this experiment.
One SO answer suggests sending a custom activation event with a timestamp and then filtering the test participants by that timestamp, e.g. custom_first_open > 1234567...; however, the onboarding flow is the first thing the user sees.
From my understanding, as soon as the user initializes their Remote Config, they will be subscribed to any active experiments. We would have to send the custom event before initialization, and it would have to be immediately available to the A/B test. A/B test data and Firebase events both seem very slow to register (hours to days), so I doubt this trick would properly enroll the user in the onboarding test.
Is there another way to use A/B testing to test onboarding efficacy against new users only?
There are a couple of ways to go about this.
First, when you create the experiment, you can limit the targeting to only include users on a new version or build of the app (or a country, etc.).
example targeting
You can also target only users in an Audience you define, which gives you pretty flexible control over the group you roll the test out to.
creating an audience
Note: we tend to recommend using first_touch_timestamp for correctly identifying new users; it works better than first_open.
Also, outcomes are easier to measure when you're looking at ARPU/LTV.
I would like to measure how long (on average) users take to perform certain actions in my app, for example the time from when a user adds an item to the cart until they purchase the items. Can Firebase Analytics track these time differences? If so, how can I get a report of it or add it to my dashboard?
I know this can be done using traces in Performance monitoring, but I want to know these time differences not to troubleshoot performance issues but rather behavioral issues for my users.
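Firebase Analytics has no built-in between-events duration metric, so one common pattern is to compute the gap offline from exported event rows (or to log the elapsed time as an event parameter yourself). A minimal sketch of the offline version, assuming (user, event_name, timestamp) tuples of my own invention:

```python
from datetime import datetime

def seconds_to_purchase(events):
    """events: (user_id, event_name, timestamp) tuples with datetime
    timestamps. Returns the average number of seconds between each
    user's first add_to_cart and their next purchase, or None if no
    user completed the pair."""
    first_add, deltas = {}, []
    for user, name, ts in sorted(events, key=lambda e: e[2]):
        if name == "add_to_cart" and user not in first_add:
            first_add[user] = ts          # remember the first add
        elif name == "purchase" and user in first_add:
            deltas.append((ts - first_add.pop(user)).total_seconds())
    return sum(deltas) / len(deltas) if deltas else None
```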
Question about goal tracking.
I have an application that I can track from Step 1 to Step 10, and the user submits data on step 11.
Pretty standard form.
We then take in the application, evaluate it, and send the user an email to log back in, accept our terms, and purchase the product, which are steps 12-15.
If I set up a goal funnel, will Analytics know the user has come back and completed steps 12-15?
Or do I basically lose all tracking after step 11?
There are some limitations in GA that could be a bit troublesome:
30-minute session window: if the user comes back later, a new session is initiated.
Multiple browsers/devices: if the last 3 steps (after receiving your email) are finished on a different laptop or mobile device, the user will have a different Client ID, so you won't be able to stitch the data together (in the GA reporting interface).
There are many more aspects to it, but if the two limitations listed above don't seem relevant, the implementation shouldn't be that difficult. For more technical details, I'd consider using the Measurement Protocol (a fancy name for server-side tracking).
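A server-side hit via the Measurement Protocol (GA4 flavor here; Universal Analytics used a different endpoint and payload) can be sketched as below. The measurement ID, API secret, and client_id are placeholders, and the client_id must match the one captured in the browser for GA to stitch the sessions together:

```python
import json
import urllib.request

MEASUREMENT_ID = "G-XXXXXXX"    # placeholder
API_SECRET = "your_api_secret"  # placeholder

def build_hit(client_id, step_name):
    """Build a GA4 Measurement Protocol payload for one funnel step.
    The event/param names are illustrative."""
    return {
        "client_id": client_id,
        "events": [{"name": step_name,
                    "params": {"funnel": "application"}}],
    }

def send_hit(payload):
    """POST the payload to the GA4 /mp/collect endpoint."""
    url = ("https://www.google-analytics.com/mp/collect"
           f"?measurement_id={MEASUREMENT_ID}&api_secret={API_SECRET}")
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

Reusing the browser's client_id server-side is exactly what lets you record steps 12-15 against the same user the funnel already knows about.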