Firebase A/B Test Dashboard shows different numbers from BigQuery

I ran an A/B test experiment with Remote Config on Firebase. After the experiment completed, the Remote Config dashboard shows the number of "Baseline" and "Variant A" users. However, when I click "Query experiment data" and use the query provided by Firebase to count distinct users (whether by user_pseudo_id or device.advertising_id), the numbers are almost twice as large as those on the dashboard. Can you please explain this or give me some tips?
I ran a query to count distinct user_pseudo_ids and it turned out to be about twice as large as what the Firebase A/B test dashboard shows.
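For context, a minimal sketch of the kind of per-group count involved, assuming a hypothetical dataset name (analytics_123456789), experiment number (20), and date range; the firebase_exp_<N> user property records which group each user was assigned to:

-- Hypothetical dataset, experiment number, and date range; substitute your own.
-- Counts distinct users per experiment group via the firebase_exp_<N> user property.
SELECT
  up.value.string_value AS experiment_group,  -- '0' = Baseline, '1' = Variant A, ...
  COUNT(DISTINCT user_pseudo_id) AS users
FROM
  `analytics_123456789.events_*`,
  UNNEST(user_properties) AS up
WHERE
  _TABLE_SUFFIX BETWEEN '20230101' AND '20230131'
  AND up.key = 'firebase_exp_20'
GROUP BY
  experiment_group

Even with a query like this, counts can still differ from the dashboard: as noted in the activation-event question below, the dashboard ties exposure to config activation, so broader queries over the raw export will typically return larger numbers.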

Related

How can I easily analyze Firebase A/B test results with event parameters?

We use the Firebase A/B test product for our mobile apps. We need to access the parameters of our events and do deeper analysis. We have worked with BigQuery for this before, but it requires a lot of effort.
Let me briefly describe our problem:
Let's say we have an event called add_to_cart. We want to look at the number of times add_to_cart is triggered from a specific screen in the A/B test results; for example, events whose firebase_screen_class is category_page. This data can be accessed by writing a query in BigQuery, but that creates extra effort for each new need.
Is there a shorter way, or a tool, for doing analysis by event parameters?
Since we found Firebase's reporting and analysis insufficient, we decided to use a different tool. If anyone encounters this problem, deep analysis is possible through BigQuery, as sketched below.
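As an illustration of the kind of BigQuery query this requires, here is a minimal sketch, assuming a hypothetical dataset name (analytics_123456789), experiment number (20), and date range:

-- Hypothetical dataset, experiment number, and date range; substitute your own.
-- Counts add_to_cart events fired from the category_page screen, per experiment group.
SELECT
  up.value.string_value AS experiment_group,
  COUNT(*) AS add_to_cart_count
FROM
  `analytics_123456789.events_*`,
  UNNEST(user_properties) AS up,
  UNNEST(event_params) AS ep
WHERE
  _TABLE_SUFFIX BETWEEN '20230101' AND '20230131'
  AND event_name = 'add_to_cart'
  AND up.key = 'firebase_exp_20'
  AND ep.key = 'firebase_screen_class'
  AND ep.value.string_value = 'category_page'
GROUP BY
  experiment_group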
Another, hackier way is to use Audiences.
1. Go to the Custom Definitions section and create a custom definition.
Your scope should be "User". Select firebase_exp_<N> as the user property; Firebase defines this property for each user it adds to the experiment. You can find the <N> number in the link on your A/B test page.
E.g., if your A/B test link is https://console.firebase.google.com/u/0/project/your-project/config/experiment/results/20, the <N> number is 20 and the user property is firebase_exp_20.
2. Create an Audience for each experiment group.
Create a new Audience based on the value of this user property. A value of 0 corresponds to the Baseline; the variant groups after that continue with consecutive numbers (1, 2, 3, ...).
3. Go to Analytics
Go to Analytics and do your analysis for each group using these Audiences.
I hope it helps.

Amazon Advertising API: Is there a way to get more than 100 results in a Report Request, or to get reports for specific campaigns, keywords, etc.?

I am developing an app that handles scheduling bids for campaigns and keywords, and need to be able to get the clicks, costs, sales etc. to analyse and make suggestions.
The only way I can see to get this data is by requesting and then downloading reports (requests to /v2/sp/{recordtype}/report) as described here:
https://advertising.amazon.com/API/docs/en-us/reference/sponsored-products/2/reports
This gives a report for all campaigns/ad groups/keywords in a profile, but it only returns up to 100 results. However, I can't find any mention of this limit in the documentation, nor any way to get the next page of results or to get results for specific campaigns.
Has anyone got any experience with this?

When do users become exposed to a Firebase A/B test when no activation event is set?

I'm running an A/B test targeting 100% of iOS users on specific versions, using a regex to match versions 2.1.27 and up; here's the regex in case it's relevant:
2\.1\.([3-9].|2[7-9])
I did not set any activation event for the experiment; meaning I left that field blank.
Now my question is: Who becomes part of the experiment? Anyone who opens the app with a matching version? Anyone who starts a session with a matching version? Anyone who engages with a matching session?
So far, it says the total number of users that have been exposed to the experiment is 9.6K. I'm trying to set up a funnel that only includes these users, but I can't figure out one that shows anything close to that number within the date range of the experiment.
Firebase Support says:
Firebase will determine if a user will be part of the experiment when your app calls the activateFetched method.
That's the answer I was looking for.
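Given that answer, one way to cross-check the exposed-user count against the export is to count users carrying the experiment's user property. A minimal sketch, assuming a hypothetical dataset name (analytics_123456789) and experiment number (20); since the property is attached when the user enters the experiment, this should track exposure more closely than raw event counts:

-- Hypothetical dataset, experiment number, and date range; substitute your own.
-- Distinct users carrying the experiment user property, per day of the experiment.
SELECT
  event_date,
  COUNT(DISTINCT user_pseudo_id) AS exposed_users
FROM
  `analytics_123456789.events_*`,
  UNNEST(user_properties) AS up
WHERE
  _TABLE_SUFFIX BETWEEN '20230101' AND '20230131'
  AND up.key = 'firebase_exp_20'
GROUP BY
  event_date
ORDER BY
  event_date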

Cumulative amount of events in Firebase A/B testing

I made a new menu for my app and ran an A/B test to optimize my revenue, with ad_impression set as the goal.
In the A/B testing console I can see that the new menu is worse for ad_impression.
I have also logged those two groups of users with User Properties, and I have noticed that my revenue is actually better with the new ("card") menu.
Does the A/B test just tell me which version is better at getting at least one ad_impression per user? How can I test for the cumulative number of event occurrences? (The A/B test doesn't take into account that, with the new version, more users will keep using the app in the long run, and so on.) If so, testing for ad_impression and other ad events is nearly pointless in my case. Do you have any plans to add the option to see cumulative event counts, like in funnels, and to optimize for revenue?
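For what it's worth, cumulative counts can be pulled from the BigQuery export rather than the A/B testing console. A minimal sketch, assuming a hypothetical dataset name (analytics_123456789), experiment number (20), and date range:

-- Hypothetical dataset, experiment number, and date range; substitute your own.
-- Total ad_impression volume and impressions per user, per experiment group,
-- rather than just "users with at least one ad_impression".
SELECT
  up.value.string_value AS experiment_group,
  COUNT(*) AS total_ad_impressions,
  COUNT(DISTINCT user_pseudo_id) AS users,
  ROUND(COUNT(*) / COUNT(DISTINCT user_pseudo_id), 2) AS impressions_per_user
FROM
  `analytics_123456789.events_*`,
  UNNEST(user_properties) AS up
WHERE
  _TABLE_SUFFIX BETWEEN '20230101' AND '20230131'
  AND event_name = 'ad_impression'
  AND up.key = 'firebase_exp_20'
GROUP BY
  experiment_group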

How to query the same crash count from BigQuery as Firebase crash reporting dashboard shows

I have tried to write a query to get the number of crashes from BigQuery for a certain day, but the number I get from the query doesn't match the number shown on the Firebase crash reporting dashboard.
What am I doing wrong?
Here is the query:
SELECT
  event_dim.date AS CrashDate,
  -- it doesn't matter which event_dim field we count here
  COUNT(event_dim.name) AS CrashCount
FROM
  TABLE_DATE_RANGE([com_sample_ANDROID.app_events_], TIMESTAMP('2017-01-27'), TIMESTAMP('2017-01-27'))
WHERE
  event_dim.name = 'app_exception'
  AND event_dim.params.key = 'fatal'
  AND event_dim.params.value.int_value = 1
GROUP BY
  CrashDate
There are a couple of things to know about what you're trying to do.
First, there is throttling in the Crash SDK that will prevent grossly repeated requests from being sent to the server. This defends us against sloppy programming in the app that could spam us. Analytics may have a different reckoning about what happened, because it's different code.
Second, for apps that legitimately send a lot of data, we may perform a sampling of the data, which means we lose some accuracy but gain a lot of speed. At that scale, you shouldn't expect your numbers to be exact (and it shouldn't matter, because the numbers will be big).
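Note that the query above uses the old event_dim export schema and legacy SQL. On the current events_* export schema, a standard SQL equivalent might look like this (dataset name hypothetical), subject to the same throttling and sampling caveats:

-- Hypothetical dataset name; substitute your own.
-- Counts fatal app_exception events for a single day on the events_* schema.
SELECT
  event_date AS CrashDate,
  COUNT(*) AS CrashCount
FROM
  `analytics_123456789.events_*`,
  UNNEST(event_params) AS ep
WHERE
  _TABLE_SUFFIX = '20170127'
  AND event_name = 'app_exception'
  AND ep.key = 'fatal'
  AND ep.value.int_value = 1
GROUP BY
  CrashDate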
