Chartboost negative balance

Today we launched our first campaign on Chartboost. We added $300 to our account and started our first campaign. In less than one hour we spent all the money. Then I switched the campaign off, but the balance is still going further into the negative. Can you help me understand why? We didn't agree to pay for that additional negative balance; we set the campaign up for $290, not $450...
What can we do?

The most common reason for campaigns to go over budget is that they reach their budget in the minutes before the first check that compares spend to budget. This can happen if your campaign has a wide target (i.e., if you are targeting the whole world). Campaigns with very broad targeting (little to no use of the filtering or targeting options available on the dashboard) serve impressions very quickly, which only increases the probability of exceeding the budget.
It is also very common for CPI campaigns to continue to "spend money" even after the campaign has been turned off. The reason this happens is that we attribute installs to recorded clicks for up to 21 days after the click. Also, the install is not recorded until the first boot-up of the app, and for a number of reasons this first boot-up may not occur until a couple of days after the app is downloaded onto the device from the App Store. There is no way to "turn off installs" for clicks that have already occurred.
Hope this helps.

Related

Google Calendar API 403 errors for 2022 dates

I have a script that creates Google Calendar events (via the Google Calendar API), inviting/auto-accepting the classroom, teacher and students (all users within our G Suite for Education account). (Yes, it is written in Perl but I don't think that is the problem.) Using this script, I manage 500-600 calendar events per school day.
There is enough rate limiting in the code and quota available in Google API Console that I can create a couple of months' worth of events in a nightly run. So I usually push one grading period into Google Calendar at a time. (I have over 37,000 events for this 2021-22 school year already pushed to Google Calendar.) This has worked since August 2018.
But, for the past month or so, if I try to create events after mid-January 2022, I get a "Forbidden (403)" after about 50 events are created. However, if I need to change 2021 or early 2022 events (for example, there is an assembly scheduled at school and the class times change, or a class moves from one room to another), I can delete/update/create the usual thousands of events per run with no problem.
As an example, tonight's run deleted and re-created 517 events for January 5, 2022 (there was a schedule change for that day) and made a few other miscellaneous changes, but only created 50 events for January 13, 2022 before hitting a "Forbidden (403)". I'm not going to be able to create anything for a few hours. But after that (or tomorrow), I'll be able to create 50 more events and then hit the same error again.
Did I miss a change to the API effective with events scheduled in the second week of 2022?
If you are confident that the quotas you use are well below the limits, you might be affected by a bug.
There is already an issue filed on Google's Issue Tracker where several users complain about the limit being reached while the console shows only 2% of the limit being used, so it is similar to your case.
I recommend you "star" this issue to increase its visibility.
However, it might first be worth contacting Google Workspace support. They can look into your quota usage and see if you are hitting some "hidden quota" you are not aware of.
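In the meantime, one thing you can do on the client side is back off and retry when the API returns a rate-limit style 403 (or 429) instead of aborting the run. Your script is Perl; the sketch below shows the same idea in Python with the google-api-python-client library, and the credentials, calendar ID, event body, and retry count are all illustrative placeholders rather than anything from your setup.

```python
import time
import random

from googleapiclient.discovery import build
from googleapiclient.errors import HttpError


def insert_event_with_backoff(service, calendar_id, event_body, max_retries=5):
    """Insert one event, retrying with exponential backoff on 403/429."""
    for attempt in range(max_retries):
        try:
            return service.events().insert(
                calendarId=calendar_id, body=event_body
            ).execute()
        except HttpError as err:
            # Retry only on rate-limit style responses; surface anything else.
            if err.resp.status not in (403, 429):
                raise
            time.sleep((2 ** attempt) + random.random())
    raise RuntimeError("Gave up after repeated 403/429 responses")


# `creds` is assumed to come from however the nightly job already
# authenticates (service account, stored OAuth token, etc.), e.g.:
# service = build("calendar", "v3", credentials=creds)
# insert_event_with_backoff(
#     service,
#     "room101@group.calendar.google.com",  # placeholder calendar ID
#     {"summary": "Period 3 - Algebra",     # placeholder event body
#      "start": {"dateTime": "2022-01-13T09:00:00-05:00"},
#      "end":   {"dateTime": "2022-01-13T09:50:00-05:00"}},
# )
```

If the 50-events-then-403 pattern persists even with backoff, that points back at a server-side quota or the bug tracked in the issue above, since backoff only helps with short-lived rate limits.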

Google Analytics: hits stop getting counted

We're implementing Google Analytics in retail consumer kiosk software. There are no JavaScript tags, SDKs, or web pages involved; we craft a URL per the Measurement Protocol and post it. We find that sometimes hits seem to just stop getting counted. If we watch the Real-Time section on the GA web site, we can see that our hits continue to get posted, but over in the Behavior / Screens section the number of screen views for this device for today stops incrementing.
It's not just a "sometimes you have to wait 24 hours" thing, because Tuesday and Wednesday of last week still show zero today. If it's a rate limit, I can't see which one. We're nowhere near 200k hits per day (per user, but from our point of view each kiosk is a user; we don't have any means to identify individual users). We shouldn't be hitting 500 hits per session, because we send a session start (ec=Session&sc=Start) each time the user does something on the main menu and a session end (ec=Session&sc=End) each time the workflow finishes, which should never be more than 20 screens. The default 'idle timeout' definition of a session wouldn't work well for us, since a user can legitimately spend 10 minutes or more on a single screen editing a picture, while the next user in line can start using the kiosk within a few seconds of the previous one finishing. And we shouldn't be sending events 'too fast', because it takes a couple of seconds for a human to read the screen and reach out and touch a button.
What we observe is that some days it counts up to 340-360 and stops and some days it stays at 0 permanently. Any idea what's happening and how to fix it?
11/24: Today it went up to 352 and then stopped. This was about one hour of activity. All of this has been done with "Highest precision" selected.
12/1: Still same, counts for about one hour, to 347 screen views today, then stops incrementing.
When I look at Audience > Overview it says "Sessions 1". There should be dozens of sessions, split up by when we send (ec=Session&sc=Start). I think it must not be recognizing that as a session start; it must be falling back to the (idle) session timeout, keeping everything within a single session, and therefore hitting the 500-hits-per-session limit (we've got some events to go along with the screen views). And this is just wrong: the session should end when we say it does.
12/1: One correction, we actually do send sc=start and sc=end, with the values lower-case, as specified by Google.
My coworker did some experimenting and found that sc=start is ignored on t=event hits. It is recognized on t=pageview hits. I changed my reporting a bit to generate a fake pageview when a session starts, just so I could send the sc=start, and now the counts are accurate.
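For anyone landing here later, this is roughly what that workaround looks like when posting Measurement Protocol (v1) hits directly. It is a minimal sketch in Python using the requests library; the tracking ID, client ID, app name, and page/screen names are placeholders, not values from the kiosk software.

```python
import uuid
import requests

GA_ENDPOINT = "https://www.google-analytics.com/collect"
TRACKING_ID = "UA-XXXXXXX-Y"   # placeholder property ID


def send_hit(client_id, payload):
    """Post one Measurement Protocol (v1) hit for the given kiosk/client."""
    data = {"v": "1", "tid": TRACKING_ID, "cid": client_id}
    data.update(payload)
    requests.post(GA_ENDPOINT, data=data, timeout=5)


kiosk_id = str(uuid.uuid4())  # in practice, one stable client ID per kiosk

# Session start: sc=start is honored on a pageview hit, so send a
# lightweight "fake" pageview when the user touches the main menu.
send_hit(kiosk_id, {"t": "pageview", "dp": "/session-start", "sc": "start"})

# Normal activity: screen views and events as before, with no sc parameter.
send_hit(kiosk_id, {"t": "screenview", "an": "KioskApp", "cd": "Main Menu"})

# Session end when the workflow finishes.
send_hit(kiosk_id, {"t": "pageview", "dp": "/session-end", "sc": "end"})
```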

Why are Google Analytics Dashboard statistics changing?

Background:
I have a Google Analytics account that I use to track user activity for a web and mobile app. After logging into your account and choosing the web property and the corresponding view, you generally see a dashboard with quick stats like Pageviews, Users, Sessions, Pages/Session, Avg. Session Duration, Bounce Rate, and percentage of new sessions. You can change the time period (from the top right area of the dashboard) to get the same stats for that period.
Problem:
Last week, I was interested in the three main stats: Pageviews, Users, and Sessions for a particular day - say, day A. The dashboard showed the following stats:
Pageviews - 1,660,137
Users - 496,068
Sessions - 983,549
This report was based on 100% of sessions.
I went back to the dashboard today and checked the same stats for the same day A. Here's what I saw:
Pageviews - 1,660,137
Users - 511,071
Sessions - 1,005,517
This report is also based on 100% of sessions.
Nothing was changed in the tracking code for the web and mobile app. Could someone explain why I have this difference in the stats? Is this normal?
They need some time to update the system; otherwise their systems would be overwhelmed.
When you first create a profile it can take up to 48-72 hours for it to start showing data.
After that time data will appear instantly in the Real-time reports.
Standard reports take longer to finish processing; remember the amount of data that is being processed. Some of the data may appear in the standard reports after a few hours, but the numbers are not done processing for at least 24 hours, so anything you look at before then will not be accurate.
When checking Google Analytics, never look at today's or yesterday's numbers in the standard reports if you want accurate information. Things get even more confusing when you consider time zones: when exactly is it yesterday? I have noticed numbers changing as far back as 48 hours, but Google says in their documentation 24 hours. I am looking for the link in the documentation and will post it when I find it.
Found it: Data Limits
Data processing latency
Processing latency is 24-48 hours. Standard accounts that send more than 200,000 sessions per day to Google Analytics will result in the reports being refreshed only once a day. This can delay updates to reports and metrics for up to two days. To restore intra-day processing, reduce the number of sessions you send to < 200,000 per day. For Premium accounts, this limit is extended to 2 billion hits per month.
So try doing the same thing again today, but make the last day you check Monday. When you check again next week, the numbers should be correct.

Google Analytics average visit duration fall

In the last two days, my website's average visit duration fell from about 1:30 to 0:50 in the Audience > Overview window and from 2:00 to 1:30 in the Content > Overview window. The visit duration parameter had held a steady value for a long time.
The website (www.rapidtables.com) seems to function well.
Hosting server activity history graph seems normal.
All other analytics parameters (visits and pages/visit) seem normal.
Why is visit duration different in the Audience > Overview and Content > Overview windows?
What could have caused the sudden drop in the duration parameter (an Analytics bug, old urchin.js usage, ...)?
Do you have historical data to compare to? If so, is this the first year it has happened, or do you see a dip about this time every year? If you have absolutely verified that nothing went wrong with your tracking code or your website in general, then it boils down to speculation. You just have to research the industry your site caters to and look for reasons that might have caused it. Maybe some new competitor opened shop? Maybe whatever product or service you offer is "seasonal"?

Google Analytics API recommends running queries on dates that are at least 48 hours in the past for consistent results

That means I can't see the traffic I got today. Also, is this specific to the API, or does it apply to the Analytics system overall?
The reason Google recommends this is that for most of the data, there is about a 24-hour delay before you see it in reports or have it available for pulling with the API. The extra 24 hours on top of that is a buffer for insurance.
So if you look at a report or pull data with the API from, say, 12 hours ago, and then wait an hour or so and pull the data again with the same ranges/metrics/etc., the numbers won't match up, because by then more data will have become available. But it's data that was already there (people didn't take a time machine into the past and visit your site, obviously); it was just not yet processed and available for looking at through the report/API.
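In practice, the recommendation just means clamping the end of your query's date range to at least two days ago before calling the API. Here is a minimal sketch in Python; the 48-hour offset reflects the recommendation above, and the YYYY-MM-DD strings are the format the API's start/end date parameters expect.

```python
from datetime import date, timedelta


def safe_date_range(days_back=30):
    """Return (start, end) ISO dates with the end at least 48 hours in
    the past, so repeated pulls return consistent, fully processed data."""
    end = date.today() - timedelta(days=2)      # 48-hour processing buffer
    start = end - timedelta(days=days_back)
    return start.isoformat(), end.isoformat()


start_date, end_date = safe_date_range(days_back=30)
# Drop these strings into the query's start-date / end-date parameters.
print(start_date, end_date)
```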
A delay in data for reports (or through an API) is not unique to GA. Different reporting tools have different "lags" in data availability, depending on how their databases are set up, how they process the data, how much you are paying for the services, etc. For instance (these are the 4 major tools I've used):
Yahoo Web Analytics data is more or less real-time
Adobe/Omniture SiteCatalyst is, they say, real-time, but in practice I've seen it take anywhere from instant to an hour
WebTrends has a 24 hour delay
GA has a 24 hour delay
But this isn't as big a deal as you might think. Most companies look at reports by the week, month, quarter, year, so really the delay isn't a problem for the people that matter. The only people that really feel it are the code implementers who have to sit there and wait to see data come in when they are trying to QA an implementation or debug when there is a potential problem.
But even then there are a lot of tools out there that let you see in real-time what is physically being sent to the tool (like firebug, charles proxy, etc...), which greatly helps in QAing. It doesn't really help as far as QAing stuff that requires settings/alterations within the tool's interface, but still, it's a big help.