In the Analytics All Pages view below, 28 people have visited our Contact-us page and spent an average of 2 minutes and 11 seconds on the page. The actual page is at https://www.sustmanufacturing.com/contact-us/ and you can see that there's nothing to do here but put in some simple information and leave a message, which is then sent to the business owner's email. I've used this page myself to send him a message and that works fine but he says he's never received a single message from a site visitor, other than me. So what in the world are people doing on this page for 2:11 if not leaving him a message? Thanks for any ideas or suggestions.
Keep in mind that this is an average, and one based on a tiny sample. Analytics can only measure time on a page when the visitor goes on to trigger another hit on your site; people who open the page and leave straight away contribute nothing to the figure. So with just 28 visits, one or two longer measured visits (someone leaving the tab open while playing with their phone, say) can produce a 2:11 average even if most people closed the page almost immediately.
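To make that concrete, here is a back-of-the-envelope illustration with made-up visit durations (not numbers from your report):

    // Hypothetical numbers: how "Avg. Time on Page" can read 2:11 from 28 visits.
    // Analytics only times a pageview when another hit follows it, so visitors
    // who open the page and leave contribute nothing to the average.
    var timedVisits = [240, 22]; // seconds, for the two visitors GA could actually time
    var untimedExits = 26;       // opened the page and left; no time recorded at all
    var avgSeconds = timedVisits.reduce(function (a, b) { return a + b; }, 0) / timedVisits.length;
    console.log(avgSeconds + 's average from ' + timedVisits.length +
                ' timed visits out of ' + (timedVisits.length + untimedExits));
    // "131s average from 2 timed visits out of 28", i.e. 2:11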
It appears to be a Squarespace site so I would not look to any technical issues here, as their implementation of Google Analytics (in my experience) is sound.
Our site has a physician directory search that has been working cross-platform for years. Over the last few months we have been getting reports that the functionality is broken. My debugging has led me to find that GTM is actually stripping the URL fragments, breaking the functionality in all browsers except IE.
We use Ajax calls to retrieve the directory page by page, 10 items at a time. Some searches can yield up to 15 pages, but users are no longer able to get past page 2 of the result set; after page 2 they are presented with the search page again.
The search was rewritten a number of years ago to use the URL hash instead of the original cookie-based system, which simply didn't work. The problem can easily be reproduced in Chrome:
Visit https://www.montefiore.org/doctors
Click Search By Specialty
Click Family Practice
Navigate to any secondary page; you will see that the hash fragments have been stripped
When you try to navigate to any tertiary page, you are simply presented with the specialty list again.
I have gone through various debugging sessions and have even handed the issue to our outside developers, but it remains unresolved. Any idea what could be causing GTM to strip out the fragments?
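For reference, the pagination is driven by the hash roughly along these lines (a simplified sketch of the pattern with hypothetical names, not our production code):

    // Simplified sketch of a hash-driven pager; names are hypothetical.
    function currentPage() {
      // e.g. "#specialty=family-practice&page=3" -> 3
      var match = window.location.hash.match(/page=(\d+)/);
      return match ? parseInt(match[1], 10) : 1;
    }

    function loadResults(page) {
      // Stand-in for the Ajax call that fetches 10 directory entries for `page`.
      console.log('fetching results page', page);
    }

    window.addEventListener('hashchange', function () {
      // If something rewrites the URL and drops the fragment, currentPage()
      // falls back to 1 and the visitor lands back on the search page.
      loadResults(currentPage());
    });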
We are having an issue with our tracking on www.x3tradesmen.com where a Google Tag Manager tag is firing way too many times and we cannot determine why...
We only have one website event tag linked to Google Analytics, called Form Submit, and we would typically receive 2-10 Form Submit events per day at most. Recently, however, the tag has been firing thousands of times sporadically and we cannot pinpoint the issue. We have also noticed that our user count jumps drastically for short periods (minutes to hours): we typically get 40-80 users per day on our website, but we once saw a spike of around 400 users in less than an hour.
We recently added the Facebook pixel via GTM, and that is really the only change we have made before these issues started. Does anyone know of any common reasons why this would behave this way, or can anyone spot any major issues with our GA or GTM implementation that could cause it?
I know this information is vague, so please let me know if there is specific information that would help identify the issue.
Thanks in advance!
I presume it is the FB pixel. Facebook automatically collects information in addition to what you have configured yourself and uses post/submit events to send it. You can disable that behaviour as per the documentation and see if it makes a difference:
Automatic Configuration
The Facebook pixel will send button click and page metadata (such as data structured according to Opengraph or Schema.org formats) from your website to improve your ads delivery and measurement and automate your pixel setup. To configure the Facebook pixel to not send this additional information, in the Facebook pixel base code, add fbq('set', 'autoConfig', 'false', '') above the init call.
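In the base code, that placement looks roughly like this (the pixel ID here is just a placeholder):

    // Inside your existing Facebook pixel base code; 'PIXEL_ID' is a placeholder.
    fbq('set', 'autoConfig', 'false', 'PIXEL_ID'); // disable automatic button/metadata events
    fbq('init', 'PIXEL_ID');
    fbq('track', 'PageView');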
I had a similar issue where additional submit events suddenly turned up in the GTM preview pane, which I finally tracked down to FB, so there is a good chance that yours is the same problem.
So I just received about 8,000 hits on my site in roughly 20 minutes. I was watching this in Google Analytics Real-Time.
All of a sudden, a page with 100 or so people on it had a query string appear at the end of the URL, like this:
website.com/page?wprptest=0 ( 100 active users )
I thought it was strange; then all of a sudden it went nuts and I had five or six versions of the same page with different strings, like this:
website.com/page?wprptest=0 ( 100 active users )
website.com/page?wprptest=1 ( 80 active users )
website.com/page?wprptest=2 ( 20 active users )
website.com/page?wprptest=3 ( 23 active users )
website.com/page?wprptest=4 ( 43 active users )
I'm running WordPress and haven't made any changes recently. I use Yoast SEO, W3 Total Cache, and Cloudflare. I keep racking my brain but can't figure out where these came from. Anyone have any ideas? During this strange occurrence, I also received about 3x as much traffic as I normally do.
It has something to do with Twitter retweets of your link, which explains why your traffic increased.
I'm sorry I couldn't comment instead of answering, because I can't tell you why, how, or where it started, but I can say that if you google ?wprptest=0 (as you probably did), you'll get only Twitter results. The few non-Twitter results are from posts that had their links copied from some tweet. Do you use any specific sharing plugin?
This comes from the WordPress Related Posts plugin; they seem to use it for testing their related posts on mobile phones and other devices. They do that to improve the related posts shown on your page, and for tracking traffic, I guess.
It seems to be harmless, so don't worry. The fact that you got 3x more traffic shows that it's working very well for you. For me it keeps people on the website for up to 15 pages per visit.
If you don't like it, you can just change the plugin.
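If you'd rather keep the plugin but not see the variants as separate pages in Analytics, you can exclude wprptest under the view's "Exclude URL Query Parameters" setting, or strip it from the reported page path. A rough sketch for the standard analytics.js pageview (adjust to whatever tracking snippet you actually use):

    // Report the pageview without the wprptest parameter so the variants
    // roll up into one page in Analytics. Sketch only; adapt to your snippet.
    var cleanedSearch = location.search
      .replace(/([?&])wprptest=\d+&?/, '$1')
      .replace(/[?&]$/, '');
    ga('send', 'pageview', location.pathname + cleanedSearch);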
The FB Like count on one of our pages was reset to zero after we temporarily took the page offline (we recently reinstated the page at its old URL).
I understand from the FB Developer docs that Facebook scrapes our pages every 24 hours; I also understand that Likes are linked to URLs.
Why has the page's Like count been reset to zero, even though it has been republished using the same URL? How long after a page is taken offline does FB consider it to be dead, and reset the Like count?
Thanks for your help,
Alex
I noticed that FB's debugger (http://developers.facebook.com/tools/debug) was showing a Like count of 29 for our recently reinstated page, even though the page itself was still showing zero Likes. This gave me some hope that the missing Likes might be added back onto the page.
Within minutes of playing with the debugger, the page's Like count was showing 29.
I'm still no closer to finding out the answer to my original question, but perhaps the FB debugger can help others with similar problems.
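For anyone who wants to trigger the re-scrape without clicking around in the debugger, my understanding is that the Graph API also accepts a scrape request. A sketch (the page URL and access token are placeholders):

    // Ask Facebook to re-scrape a URL programmatically; URL and token are
    // placeholders, and this assumes the Graph API scrape call as I understand it.
    fetch('https://graph.facebook.com/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: 'id=' + encodeURIComponent('https://example.com/reinstated-page/') +
            '&scrape=true&access_token=ACCESS_TOKEN'
    })
      .then(function (res) { return res.json(); })
      .then(function (data) { console.log(data); });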
There is a blog, powered by WordPress, which has a valid RSS feed (it opens fine in Safari) but doesn't show new posts in Google Reader. In fact, the latest article in Google Reader is from Jul 21, 2010, while the latest article on the blog is from Aug 19, 2010.
What should I do to the RSS feed (escape characters? modify the XML? something else?) for it to work in Google Reader?
This is a reopened question: the original question I found was migrated to Super User and then closed there as a better fit for Stack Overflow, so no solution was ever provided and no chance was given to provide one. Please give it a chance to get answered.
Update:
Google Reader pulls new articles in groups of 10, and not the latest ones. For example, if 11, 12, or 13 new articles are not yet shown in Google Reader, then when the next one is added, the oldest 10 of them (exactly 10) appear in Google Reader, and the date shown there is the same for every article, as if all 10 were published in the same second: the second they appeared in Google Reader. This problem doesn't show up in the other aggregators I've tried.
Update 2:
Articles have started showing up regularly, so the problem is solved, at least temporarily. I don't know why; maybe it's because more readers subscribed (for testing purposes), or because of the PubSubHubbub plugin that I added recently. Until it becomes clear, and for 3 more days, this question remains open.
I just added the blog to my Google Reader and had a bit of a play. I noticed the same behaviour you observed: I was missing the 5 most recent posts, and a bunch of about 10 of them all had the same date.
After doing a bit of a search on the web, I found this post, which explains how you can actually view the Published date via a tooltip on the right-hand side.
Then, once I clicked the "Refresh" button at the top of Google Reader, the new posts showed up.
I believe that high-volume blogs on the Google spiders' radar are indexed every few hours, so all posts have a Received date very close to their Published date and nobody notices (or cares) that it is actually the Received date being displayed.
For low-volume blogs, however, it seems the cache is updated much less frequently. Google has some tips for getting it to update ("Feed not updating in Reader"). Maybe my subscription to the blog updated the cache, but as the spider has a delay, I didn't see the updates until pressing "Refresh". Or maybe pressing the "Refresh" button triggered it to look for new posts immediately.
Lastly, I subscribed to the blog from my wife's Google Reader account, and this time the 5 latest posts came up straight away, with matching Received times that translated back to roughly when I pressed the "Refresh" button (or maybe when I added the feed).
I feel your pain - I agree that it all seems a bit cumbersome for a low volume RSS feed ...
You may also check with the blog author / hosting company and see if they have turned down the Google indexing rate. Google can create high volumes of traffic on a site. Turning down the indexing rate (crawl rate) will help with that but it b0rks Google Reader.
As other posters have mentioned, it could also be a factor of low popularity / low page rank / something else causing Googlebot to fail to crawl the blog frequently enough.
Google Reader's display depends on Google crawling the blog to pick up the latest content. Realistically, you'll want a client-side pull of the RSS feed to get the latest data so you aren't dependent on Google crawling the website; Outlook 2010, Firefox, and many other clients can do this. The client-side software pulls the updated RSS feed directly from the blog, capturing posts as soon as they are published to the feed.
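As a rough illustration of what a client-side pull looks like in the browser (a sketch; it assumes an RSS 2.0 feed like WordPress's /feed/ served from the same origin or with CORS headers, otherwise use one of the desktop readers above):

    // Fetch the feed directly and list the newest posts, independent of
    // Google's crawl cache. The feed URL is a placeholder.
    function listLatestPosts(feedUrl) {
      return fetch(feedUrl)
        .then(function (res) { return res.text(); })
        .then(function (xmlText) {
          var doc = new DOMParser().parseFromString(xmlText, 'application/xml');
          return Array.prototype.slice.call(doc.querySelectorAll('item')).map(function (item) {
            return {
              title: item.querySelector('title').textContent,
              published: item.querySelector('pubDate').textContent
            };
          });
        });
    }

    listLatestPosts('/feed/').then(function (posts) { console.log(posts.slice(0, 10)); });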
Thank you for your responses; with your help I've come up with some possible explanations.
I don't know whether it's something I did or something independent of that, but as of yesterday (when you answered this question), feeds started showing up normally.
Maybe it's because, thanks to you, the blog got more subscribers in Google Reader and the update rate jumped (just as #Bermo suggested).
Or maybe the introduction of the PubSubHubbub plugin changed something, but it's more likely the first explanation (the number of subscribers). Though it is still a mystery why other extremely unpopular blogs deliver articles to my Google Reader regularly.
For now I will only upvote good answers, until everything becomes clear (I can't really determine the exact cause) or until the last day of this bounty.