I'm working with a PHP client that gets data from the Google Analytics API. Plenty of requests work fine, but for some Google Analytics accounts I'm getting a null value in the [Rows] result even though totalResults is greater than 0.
As an example:
Rows result is null
[Rows] =>
but in the total data set it says that there are 67 records.
[totalResults] => 67
These are the dimensions I'm asking for:
'ga:fullReferrer', 'ga:browser', 'ga:deviceCategory', 'ga:operatingSystem', 'ga:dateHour', 'ga:country','ga:pagePath'
Last year Firestore introduced count queries, which allow you to retrieve the number of results in a query/collection without actually reading the individual documents.
The documentation for this count feature mentions:
Aggregation queries rely on the existing index configuration that your queries already use, and scale proportionally to the number of index entries scanned. This means that aggregations of small- to medium-sized data sets perform within 20-40 ms, though latency increases with the number of items counted.
And:
If a count() aggregation cannot resolve within 60 seconds, it returns a DEADLINE_EXCEEDED error.
How many documents can Firestore actually count within that 1 minute timeout?
I created some collections with varying numbers of documents in a test database, and then ran count() queries against them.
The code to generate the minimal documents through the Node.js Admin SDK:
// Assumes initializeApp() from "firebase-admin/app" has already been called.
import { getFirestore, FieldValue } from "firebase-admin/firestore";

const db = getFirestore();
const col = db.collection("10m");
let count = 0;
const writer = db.bulkWriter();
while (count++ < 10_000_000) {
  // Flush every 1,000 docs so pending writes don't pile up in memory
  if (count % 1000 === 0) await writer.flush();
  writer.create(col.doc(), {
    index: count,
    createdAt: FieldValue.serverTimestamp()
  });
}
await writer.close();
Then I counted them with (note: this part uses the client-side Web SDK, and I included the "100k" collection that shows up in the results):

import { getFirestore, collection, getCountFromServer } from "firebase/firestore";

const db = getFirestore();
for (const name of ["1k", "10k", "100k", "1m", "10m"]) {
  const start = Date.now();
  const result = await getCountFromServer(collection(db, name));
  console.log(`Collection '${name}' contains ${result.data().count} docs (counting took ${Date.now() - start}ms)`);
}
And the results I got were:
count         ms
1,000         120
10,000        236
100,000       401
1,000,000     1,814
10,000,000    16,565
I ran some additional tests with limits and conditions, and the results were always in line with the above for the number of results that were counted. So for example, counting 10% of the collection with 10m documents took about 1½ to 2 seconds.
So based on this, you can count up to around 40m documents before you reach the 60 second timeout. Honestly, given that you're charged 1 document read for every up to 1,000 documents counted, you'll probably want to switch over to stored counters well before that.
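To make the extrapolation and the billing concrete, here is a quick back-of-the-envelope sketch (my own arithmetic, derived only from the measurements above and the 1-read-per-1,000-counted billing rule):

```javascript
// Extrapolate from the measurement above: 10M docs took ~16.5s,
// so within the 60s deadline roughly 60/16.5 * 10M docs fit.
const docsPerMs = 10_000_000 / 16_565;
const maxCountable = Math.floor(docsPerMs * 60_000); // ≈ 36M docs

// Billing: 1 document read is charged per (up to) 1,000 documents counted.
const readsFor40M = Math.ceil(40_000_000 / 1_000); // 40,000 billed reads

console.log(maxCountable, readsFor40M);
```

The raw extrapolation lands a bit under the 40m figure; the gap is plausible because the fixed per-request overhead (visible in the small-collection timings) amortizes away at larger counts.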
I'm using Firebase Realtime Database transaction to update a node.
See the following code:
const addVal = 3.74;
return admin.database().ref(`admin/test/node`).transaction((current_value) => {
return (current_value || 0) + addVal;
});
It generally works fine, but it randomly throws a maxretry error once or twice a week (I'm not sure of the reason; maybe high contention?).
This time, however, there is a specific problem: when the node holds particular decimal values, it throws maxretry errors every time.
In the case above, when current_value = 63.99999999999999 exists in the node, the transaction never succeeds. With 62.99999999999999, 64.99999999999999, or 64 it works fine. The error thrown every time is:
Error: maxretry
at Repo.rerunTransactionQueue_ (/srv/node_modules/@firebase/database/dist/index.node.cjs.js:14629:67)
at Repo.rerunTransactions_ (/srv/node_modules/@firebase/database/dist/index.node.cjs.js:14534:10)
at /srv/node_modules/@firebase/database/dist/index.node.cjs.js:14513:19
at /srv/node_modules/@firebase/database/dist/index.node.cjs.js:11946:17
at PersistentConnection.onDataMessage_ (/srv/node_modules/@firebase/database/dist/index.node.cjs.js:11976:17)
at Connection.onDataMessage_ (/srv/node_modules/@firebase/database/dist/index.node.cjs.js:11290:14)
at Connection.onPrimaryMessageReceived_ (/srv/node_modules/@firebase/database/dist/index.node.cjs.js:11284:18)
at WebSocketConnection.onMessage (/srv/node_modules/@firebase/database/dist/index.node.cjs.js:11185:27)
at WebSocketConnection.appendFrame_ (/srv/node_modules/@firebase/database/dist/index.node.cjs.js:10773:18)
at WebSocketConnection.handleIncomingFrame (/srv/node_modules/@firebase/database/dist/index.node.cjs.js:10824:22)
Note: The node is not being written by other clients. This is for testing.
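(Not part of the original question, but worth noting: values like 63.99999999999999 are classic IEEE-754 floating-point artifacts from repeatedly adding a decimal such as 3.74 that has no exact binary representation. A common mitigation, sketched below with a hypothetical round2 helper and not a confirmed fix for the maxretry error, is to round to a fixed precision inside the transaction handler:)

```javascript
// 3.74 has no exact binary (IEEE-754 double) representation, so repeatedly
// adding it accumulates error and eventually produces values such as
// 63.99999999999999 in the node.
const round2 = (x) => Math.round(x * 100) / 100; // hypothetical helper

const current_value = 63.99999999999999; // the problematic stored value
const addVal = 3.74;

// In the real transaction handler you would `return round2(...)` instead
// of returning the raw sum:
const next = round2((current_value || 0) + addVal);
console.log(next); // 67.74
```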
I'm having a bit of trouble with a Firebase query, mainly due to the size of the dataset I am querying.
What I would like to achieve is:
Find all tshirts where brandStartsWith = 'A' and salesRank is between 1 and 100
I've started to pad this out, but I am running into an issue whereby I can't seem to get the data due to having over 300,000 records within t-shirts.
If I call it within React when the page loads, after a while I get the following error in the console:
Uncaught RangeError: Invalid string length
Here is the code I am using to get started, but I'm not sure where to go from here. Looking at the solutions on this question, it seems I need to download the data per my query below and then sort it on the client side, which is something I can't seem to do.
firebase.database().ref('tshirts')
.orderByChild('brandStartsWith')
.equalTo('A')
.once('value', function (snapshot) {
console.log(snapshot.val())
})
You're going to need to create a combined key, because you can only filter on one child key at a time.
{
  "tShirts" : {
    "tshirt1" : {
      "brandStartsWith" : "A",
      "salesRank" : 5,
      "brandStartsWith_salesRank" : "A_00005" // pad for as many sales ranks as you have
    },
    "tshirt2" : {
      "brandStartsWith" : "B",
      "salesRank" : 108,
      "brandStartsWith_salesRank" : "B_00108"
    },
    "tshirt3" : {
      "brandStartsWith" : "C",
      "salesRank" : 52,
      "brandStartsWith_salesRank" : "C_00052"
    }
  }
}
This will allow you to do this query:
firebase.database().ref('tshirts')
.orderByChild('brandStartsWith_salesRank')
.startAt('A_00001')
.endAt('A_00100')
.once('value', function (snapshot) {
console.log(snapshot.val())
})
Don't forget to update your security rules to add an .indexOn rule for brandStartsWith_salesRank.
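To build that combined key consistently when writing the data, a small helper that zero-pads the rank works well (a sketch; the function and constant names are my own):

```javascript
// Build the combined "brandStartsWith_salesRank" key with a zero-padded
// rank so lexicographic ordering matches numeric ordering.
const RANK_DIGITS = 5; // pad width; pick enough digits for your max rank

function combinedKey(brandStartsWith, salesRank) {
  return `${brandStartsWith}_${String(salesRank).padStart(RANK_DIGITS, "0")}`;
}

console.log(combinedKey("A", 5));   // "A_00005"
console.log(combinedKey("B", 108)); // "B_00108"
```

Writing the key through one helper like this guarantees that the values you store and the startAt/endAt bounds you query with are padded the same way.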
On pay-per-view content nodes (with the Drupal MoneySuite module), when I click 'override settings' to input an amount/type for the node, it crashes with the error below. I have tried many formats for the price (e.g. 1, or 1.00) and dates (e.g. 2, or 2 days). I have also tried full HTML, filtered HTML, and plain text in the settings for the field. One answer on Stack Exchange hints that this is a UTF-8 issue, but I don't know what that means or how it would be solved. Any tips?
The error is:
PDOException: SQLSTATE[22007]: Invalid datetime format: 1366 Incorrect integer value: 'full_html' for column 'protected_content_message_format' at row 1: INSERT INTO {ms_ppv_price} (vid, nid, price, expiration_string, allow_multiple, protected_content_message, protected_content_message_format, stock, out_of_stock_message) VALUES (:db_insert_placeholder_0, :db_insert_placeholder_1, :db_insert_placeholder_2, :db_insert_placeholder_3, :db_insert_placeholder_4, :db_insert_placeholder_5, :db_insert_placeholder_6, :db_insert_placeholder_7, :db_insert_placeholder_8); Array ( [:db_insert_placeholder_0] => 96 [:db_insert_placeholder_1] => 96 [:db_insert_placeholder_2] => 3 [:db_insert_placeholder_3] => 3 days [:db_insert_placeholder_4] => 0 [:db_insert_placeholder_5] => This is a premium film- pay per view only. Get access [ms_ppv:price] to view for [ms_ppv:expirationLength] : [ms_ppv:addToCartLink] [ms_ppv:nodeTeaser] [:db_insert_placeholder_6] => full_html [:db_insert_placeholder_7] => 0 [:db_insert_placeholder_8] => ) in ms_ppv_insert_node_price() (line 774 of /home/cineafzh/public_html/sites/all/modules/moneysuite/ms_ppv/ms_ppv.module).
Looks like the MoneySuite module created the database table incorrectly.
Your error message explains exactly what's going wrong.
The column 'protected_content_message_format' was created with a non-string type in your database (MySQL reports both an invalid datetime format and an incorrect integer value). The value the module attempts to store in it, 'full_html', is a string, so the INSERT fails validation and throws an exception.
One workaround would be to change that column's type to a string type such as VARCHAR, e.g. ALTER TABLE ms_ppv_price MODIFY protected_content_message_format VARCHAR(255); (adjust the table name for any Drupal table prefix).
I can't guarantee that this won't introduce other undesirable behaviour without looking at the code, but it would definitely resolve this specific error.
I'm experimenting with Google Analytics, and I thought I had found a way to implement what I need. But to my surprise, after waiting for a day, the results are not what I expected.
Here's my JavaScript code to log some events. This first part is written in the head tag:
ga('create', 'UA-XXXXX-1', {
'cookieDomain': 'none' //Since I'm testing from my localhost
});
ga('set', 'screenName', 'Testing GA');
ga('set', 'dataSource', 'localhost-spa');
ga('set', 'userId', 'mehran');
ga('send', 'pageview');
Then I have the following in a button's onclick:
var i = Math.round(Math.random() * 10);
ga('send', 'event', {
'eventCategory': 'cat1',
'eventAction': 'action_1',
'eventLabel': 'Action 1',
'eventValue': 1,
'dimension1': 'dim_' + i
});
Yesterday I clicked the button a number of times, perhaps around 200. And I designed a custom report with the following configuration:
Metrics: Total Events, Unique Events
Dimensions: dimension1
Filters: Include + Event Action + Exact = action_1
And as I said, after waiting one day, the results were:
# ----------------------------------------
# All Web Site Data
# Events
# 20150401-20150501
# ----------------------------------------
dimension1   Total Events   Unique Events
dim_2        24             2
dim_3        20             2
dim_5        20             2
dim_4        18             2
dim_8        17             1
dim_1        16             2
dim_10       16             2
dim_9        16             2
dim_6        13             2
dim_7        12             2
             181            21
Why does the Unique Events column have 2 in it? How is an event considered unique? I was expecting all the values in the Unique Events column to be 1!
[UPDATE]
I created another report and that puzzles me as well. Here's its definition:
Metrics: Total Events, Unique Events
Dimensions: Event Action
Filters: Include + Event Action + Exact = action_1
And it outputs:
# ----------------------------------------
# All Web Site Data
# Events
# 20150401-20150501
# ----------------------------------------
Event Action   Total Events   Unique Events
action_1       181            2
               181            2
I was hoping to see 10 in the Unique Events column! What's going on? Is it me or Google who needs to change?
[UPDATE]
Now that I think of it, considering the second report's results, the first report translates to:
SELECT COUNT(DISTINCT eventValue) FROM ... GROUP BY date, dimension1 HAVING `Event Action` = 'action_1'
Yet that doesn't explain the value of 2 instead of 1! I was also expecting the filters to be translated into a condition in the WHERE clause.
What I suspect might be happening here is that your series of "Event-generating clicks" might have spanned across 2 Sessions.
As per the Google Analytics' definitions (visible in the Report tooltips):
Total Events: Total Events is the number of times events occurred.
Unique Events: The number of times during a date range that a session contained the specific dimension or combination of dimensions.
For more information, it might be worth reading this article, which gives a more detailed explanation of Unique Events: http://www.analyticsedge.com/2014/09/misunderstood-metrics-unique-events/