I started using the Google Analytics PHP API to retrieve visitor data for URLs on my website. I got it working as intended, but it is very slow: getting data for one URL over the last couple of months takes up to a minute, which is painful when you have thousands of URLs. Does anyone have the same issue, or a hint on how to make the queries faster?
I do realize there's a quota when using the API, but I'm nowhere near the limit yet.
$analytics = initializeAnalytics();
$results = $analytics->data_ga->get(
    $account,
    $start_date,
    $end_date,
    $metrics,
    $params
);
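One thing that usually helps here (a sketch of the idea, not the PHP client itself): instead of one get() call per URL, request ga:pagePath as a dimension in a single query and index the rows locally. Sketched in JavaScript below with a made-up fetchReport standing in for the real API call; the rows just mimic the dimension-then-metric shape the Core Reporting API returns.

```javascript
// Hypothetical stand-in for ONE reporting call that asks for ga:pagePath
// as a dimension, so a single request covers every URL at once.
function fetchReport() {
  // Rows come back as [dimensionValue, metricValue], like the real API.
  return [
    ['/article-1', '120'],
    ['/article-2', '45'],
    ['/article-3', '990'],
  ];
}

// Build a pagePath -> pageviews lookup from one bulk response, replacing
// thousands of per-URL API round-trips with a local map lookup.
function buildPageviewIndex(rows) {
  const index = new Map();
  for (const [pagePath, pageviews] of rows) {
    index.set(pagePath, Number(pageviews));
  }
  return index;
}

const index = buildPageviewIndex(fetchReport());
console.log(index.get('/article-2')); // 45
```

With thousands of URLs this turns N round-trips into one query (or a few paginated ones), which is likely where most of your minute-per-URL is going.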
My main problem is that since July 23rd, Google Analytics page tracking has stopped working, even though I did not change anything.
I have multiple websites with an integrated questionnaire. When people answer a question the URL does not change, but somehow (I did not write the questionnaire myself, so I do not know exactly how it works) Analytics tracked their behaviour as page views with specific URLs, e.g. www.url.com/question-5.
I think this is the code that tells Google Analytics to track the page view:
window._mfq.push(["newPageView", "/".concat(M || t)]),
"function" == typeof window.ga && (
    window.ga("gtm1.set", "dimension1", e),
    window.ga("gtm1.set", "page", "/".concat(M || t)),
    window.ga("gtm1.send", "pageview")
),
Google Analytics is integrated with Google Tag Manager, and the Google Analytics tag is triggered by the built-in "Page View" trigger.
How is it possible that it was working without problems until last week and suddenly stopped (on roughly 10 different websites)? It seems to me that Google Analytics (or Tag Manager) pushed an update, but I cannot find any information about this.
Do you know if something changed in the last week?
Addition:
The Google Analytics Debugger tells me the following:
Executing Google Analytics commands.
>Running command: ga("gtm1.set", "dimension1", 200624)
Executing Google Analytics commands.
>Running command: ga("gtm1.set", "page", "/steps-route-1-1-question")
Executing Google Analytics commands.
>Running command: ga("gtm1.send", "pageview")
Ok, found the problem.
The tracker gtm1 is not initialized, so the page view is no longer sent. I don't know exactly why it worked before. Possibly the tracker that records the actual page view used to be named gtm1, and due to some change on Google's side this is no longer the case.
Yup, this was a Google change. Events are now sent using the gtag function, see https://developers.google.com/analytics/devguides/collection/gtagjs/events
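For anyone hitting the same thing: the snippet above hard-codes the tracker name gtm1, so it silently does nothing when no tracker with that name exists. A defensive sketch (not the questionnaire's actual code; 'GA_MEASUREMENT_ID' is a placeholder) that checks which transport is actually available before sending:

```javascript
// Send a pageview for `path`, preferring the legacy named tracker if it
// exists, otherwise falling back to gtag (as on the page after Google's
// change). Takes the window-like object as a parameter so it can be tested;
// returns a string describing which transport was used.
function sendPageview(path, win) {
  const trackerNames =
    typeof win.ga === 'function' && typeof win.ga.getAll === 'function'
      ? win.ga.getAll().map((t) => t.get('name'))
      : [];

  if (trackerNames.includes('gtm1')) {
    win.ga('gtm1.set', 'page', path);
    win.ga('gtm1.send', 'pageview');
    return 'ga:gtm1';
  }
  if (typeof win.gtag === 'function') {
    // 'GA_MEASUREMENT_ID' is a placeholder for the real property ID.
    win.gtag('config', 'GA_MEASUREMENT_ID', { page_path: path });
    return 'gtag';
  }
  return 'none'; // nothing available: log it instead of failing silently
}
```

Returning 'none' instead of throwing keeps the questionnaire itself working even when tracking is broken, which is probably what you want on a production site.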
Is anybody else having trouble with Amazon Advertising reports this week or am I doing something wrong?
This was working just fine last week, then all of a sudden I couldn't get reports any more. Instead of requesting a report and it being available max 10 seconds later, I get this response:
{'reportId': 'snip', 'status': 'IN_PROGRESS', 'statusDetails': 'Report generation is in progress.'}
Which is nothing out of the ordinary. Then a few minutes later I start getting this:
{'reportId': 'snip', 'status': 'IN_PROGRESS', 'statusDetails': 'Report generation job has been submitted.'}
And then eventually:
{'code': 'SERVER_IS_BUSY', 'details': 'Server is busy. Try again later.', 'requestId': 'snip'}
Authentication seems to be fine, I think I wouldn't be able to request a report without that working. And I think if I was getting throttled it would tell me that. FYI this is happening in the US and CA stores.
Aside: the Advertising API is such a hard one to google, given that its name is a subset of the Product Advertising API, which is completely different. Hopefully Amazon, given how often they change the names of things, will decide to rename this one too.
EDIT: only having this problem with Sponsored Products reports. Sponsored Brands seems to be ok.
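Until it clears up on Amazon's side, about the only workaround is to poll more patiently and treat SERVER_IS_BUSY as retryable (an assumption on my part; the response itself only says "Try again later"). A sketch of such a loop, where getReportStatus is a hypothetical async wrapper around the report-status endpoint:

```javascript
// Poll a report until it reaches SUCCESS, retrying on IN_PROGRESS and on
// SERVER_IS_BUSY with exponential backoff. `getReportStatus` is a
// hypothetical async wrapper around the Advertising API status endpoint.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitForReport(reportId, getReportStatus, maxAttempts = 8) {
  let delay = 1000; // start at 1 second, double on every retry
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await getReportStatus(reportId);
    if (res.status === 'SUCCESS') return res;
    const retryable =
      res.status === 'IN_PROGRESS' || res.code === 'SERVER_IS_BUSY';
    if (!retryable) {
      throw new Error(`Report failed: ${JSON.stringify(res)}`);
    }
    await sleep(delay);
    delay *= 2;
  }
  throw new Error(`Report ${reportId} not ready after ${maxAttempts} attempts`);
}
```

It won't make the reports generate any faster, but it keeps a batch job from falling over the moment Amazon has a bad day.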
We use the same API and are seeing the same issue; their API has not been stable lately.
We have an issue with Firebase access latency.
We have a Node.js application using the Firebase SDK.
In the code we have the following sequence of requests to Firebase:
1. Get keys by GeoFire
2. Then get several (from 3 to 100) branches by keys in parallel.
3. Then get the same number of branches from another entity by keys in parallel.
In JavaScript, the parallel requests look like this (map here is lodash's map(collection, fn)):
const { map } = require('lodash');

const cardsRefs = map(keys, (key) => this.db.ref('cards').child(key));
return Promise.all(
    map(cardsRefs, (ref) =>
        ref.once('value').then((snapshot) => snapshot.val())
    )
);
That's all, not so big, I think.
But it can take a lot of time, and we can never be sure how much: sometimes it is 700 ms, and sometimes it is much more (up to 5000 ms). We expected more predictable behaviour from Firebase here.
First we thought the reason for the unstable latency was a badly chosen Google Cloud location (the code ran on Google Compute Engine). We tried us-west1 and us-central1. Please point us to the right datacenter location.
But then we rewrote the code as a Google Cloud Function and got the same result.
Please tell us how we can get more predictable and stable latency on Firebase requests.
We migrated our backend functions to Cloud Functions and the situation improved: each function became roughly 1.5-2 seconds faster. At the moment we are satisfied with this.
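For what it's worth, part of the improvement on Cloud Functions in setups like this often comes from reusing initialized state across warm invocations. A sketch of that pattern, deliberately generic: createClient here stands in for whatever Firebase initialization you do (e.g. admin.initializeApp() plus grabbing a database handle), which is an assumption about your setup.

```javascript
// Cache an expensive client at module scope so that warm invocations of a
// Cloud Function reuse it instead of paying the initialization cost again.
let cachedClient = null;
let initCount = 0; // only here to make the reuse observable

function getClient(createClient) {
  if (!cachedClient) {
    cachedClient = createClient(); // cold start: pay the cost once
    initCount++;
  }
  return cachedClient; // warm invocations hit the cache
}

// A handler sketch: every invocation asks for the client, but only the
// first one actually constructs it.
function handler(createClient) {
  const db = getClient(createClient);
  return db;
}
```

The same idea is why moving initialization out of the request handler and into module scope tends to shave seconds off warm requests.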
I am implementing PayPal Express Checkout (using the PayPal REST SDK for PHP) in one of my projects for recurring billing (subscriptions). Everything works fine for the initial requests (about 10 checkouts); after that I start getting error 400 for a few days, and then everything starts working again.
I just wanted to confirm whether there is any limit on creating billing agreements in the sandbox environment.
Thanks in advance
Finally found the solution. The reason I was getting error 400 after some time was that I was setting a static time while creating the billing agreement.
$agreement = new Agreement();
$agreement->setName('My Billing Agreement')
->setDescription('Subscription to My Billing Agreement')
->setStartDate(date('Y-m-d').'T9:45:04Z');
The reason this snippet resulted in an error is that the start date/time of a billing agreement must be in the future. With a hard-coded time of day, every request sent after 9:45:04 asks for a start time in the past, which explains why it worked for a while and then started failing.
All I needed to do was replace
setStartDate(date('Y-m-d').'T9:45:04Z')
with
setStartDate(date("c", time() + 1800))
and everything started working as expected. Hope this helps someone.
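The same fix in any language boils down to computing the start date relative to now instead of hard-coding a time of day. A quick sketch of the idea (the 30-minute offset mirrors the time() + 1800 above):

```javascript
// Build an ISO-8601 start date a fixed offset in the future, mirroring
// PHP's date("c", time() + 1800). A billing agreement start date must be
// in the future, so never hard-code the time-of-day part.
function futureStartDate(offsetSeconds = 1800, now = Date.now()) {
  return new Date(now + offsetSeconds * 1000).toISOString();
}

console.log(futureStartDate(1800, Date.parse('2020-01-01T00:00:00Z')));
// → 2020-01-01T00:30:00.000Z
```

Passing `now` as a parameter also makes the function easy to test, since you can pin the clock to a known instant.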
I am using the instructions from https://github.com/cannod/moodle-drupalservices/wiki/Installation-Drupal-Side to integrate my Drupal sign-in with a Moodle installation. I have successfully completed the steps and ran the "tests" indicating that my Drupal service is set up correctly. I.e., I am able to log in to Drupal using the "remote" user and get a valid JSON response from the service endpoint. However, after completing the "Moodle side" instructions, I tried to manually run the database sync file from the command line as per the instructions and received the following output:
RemoteAPI Object
(
[gateway] => mysitesurl.com
[endpoint] => /drupalservice
[status] => 1
[session] => SESScc2ded1dd0a5... //this part is okay
[sessid] => vtlmSjtBINVA... //this part is okay as well
)
ERROR: Problems trying to get index of users!
I looked at the code, and the [status] of 1 seems to indicate that the log in was successful, so I can't imagine what the issue is. I found a couple of other people on this site saying they had the same problem, then replied to their own post with something along the lines of "I figured it out!" and not posting the answer.
Any advice would be greatly appreciated!
You didn't create the view properly; that's why you got the error. Follow the instructions carefully. I did, and it works perfectly on my side.
After many hours of wanting to pull my hair out, I figured it out. Something VERY helpful to know for troubleshooting is that within the function CurlHttpRequest (line 135), you can access any curl errors generated while accessing your service. I echoed that out and discovered that the request was timing out before the results were delivered, so I went into the GetCurlGet function, increased CURLOPT_TIMEOUT a little, and voilà! Everything worked great after that.