Like count reset after page was temporarily taken offline

The FB Like count on one of our pages was reset to zero after we temporarily took the page offline (we recently reinstated the page at its old URL).
I understand from the FB Developer docs that Facebook scrapes our pages every 24 hours; I also understand that Likes are linked to URLs.
Why has the page's Like count been reset to zero, even though it has been republished using the same URL? How long after a page is taken offline does FB consider it to be dead, and reset the Like count?
Thanks for your help,
Alex

I noticed that FB's debugger (http://developers.facebook.com/tools/debug) was showing a Like count of 29 for our recently reinstated page - even though the page itself was still showing zero Likes. This gave me some hope that the missing Likes might be added back onto the page.
Within minutes of playing with the debugger, the page's Like count was showing 29.
I'm still no closer to finding out the answer to my original question, but perhaps the FB debugger can help others with similar problems.
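For anyone hitting the same thing: the debugger's re-scrape can also be triggered programmatically. Below is a minimal sketch (not an official recipe) that posts the URL to the Graph API with scrape=true; the page URL and access token are placeholders, and I'm assuming this behaves the same as pressing the debugger's fetch button.

    // Hedged sketch: ask Facebook to re-scrape a URL via the Graph API.
    // PAGE_URL and ACCESS_TOKEN are placeholders, and the behaviour is
    // assumed to match the debugger's "fetch new scrape information" action.
    const PAGE_URL = "https://example.com/reinstated-page"; // placeholder
    const ACCESS_TOKEN = "YOUR_APP_ACCESS_TOKEN";           // placeholder

    async function rescrape(url: string): Promise<void> {
        const body = new URLSearchParams({
            id: url,
            scrape: "true",
            access_token: ACCESS_TOKEN,
        });
        const res = await fetch("https://graph.facebook.com/", { method: "POST", body });
        // The response is the refreshed Open Graph object for the URL,
        // including its share/like counts.
        console.log(await res.json());
    }

    rescrape(PAGE_URL).catch(console.error);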

Related

GTM strips URL fragments, breaking functionality

Our site has a physician directory search that has worked cross-platform for years. Over the last few months we have been getting reports that it is broken. My debugging has led me to find that GTM is actually stripping the URL fragments, which breaks the functionality in every browser except IE.
We use Ajax calls to retrieve the directory page by page, 10 items at a time. Some results can yield up to 15 pages, but users are no longer able to get past page 2 of the result set. After page 2 it produces the search page again.
This was rewritten a number of years ago to use the URL hash instead of the original cookie-based system, which simply didn't work. The problem can easily be reproduced in Chrome:
Visit https://www.montefiore.org/doctors
Click Search By Specialty
Click Family Practice
Navigate to any secondary page; you will see that the hash fragments have been stripped
When you try to navigate to any tertiary page, you are simply presented with the specialty list again.
I have gone through various debugging sessions and have even outsourced the issue to our outside developers, but it remains unresolved. Any idea what could be causing GTM to strip out the fragments?
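For context, here is a minimal sketch of the hash-based pagination pattern described above. The endpoint, parameter names and markup hooks are hypothetical, not the site's actual code; the point is only that the page number lives in the URL fragment, so anything that rewrites the URL and drops the fragment breaks navigation.

    // Minimal sketch of hash-based pagination (hypothetical endpoint and markup).
    // State such as #specialty=family-practice&page=3 is kept in the fragment.
    async function renderFromHash(): Promise<void> {
        const params = new URLSearchParams(window.location.hash.slice(1));
        const specialty = params.get("specialty") ?? "";
        const page = params.get("page") ?? "1";

        // Hypothetical Ajax endpoint returning 10 results per page.
        const res = await fetch(
            `/doctors/search?specialty=${encodeURIComponent(specialty)}&page=${page}`
        );
        document.querySelector("#results")!.innerHTML = await res.text();
    }

    // Re-render whenever the fragment changes (e.g. the user clicks "next page").
    window.addEventListener("hashchange", () => { renderFromHash(); });
    window.addEventListener("DOMContentLoaded", () => { renderFromHash(); });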

LinkedIn not utilizing og:image

I've got a WordPress site with multiple share buttons on its entries.
We designed it so there are no individual entry pages to view; the entries are podcasts and videos. The listing page has a minimum of 10 entries, each with share buttons.
Currently the share links and titles are working correctly, but LinkedIn is not recognizing the og:image and is instead picking up the site's default logo.
I read another post on Stack Overflow that said it might be an issue for LinkedIn if the image link uses SSL, but I find that hard to believe.
The other issue I'm struggling with: the docs say that once an image is scraped it stays cached for approximately 7 days.
I had a similar issue with Facebook, and there's a debugger that allows you to rescrape the page, which lets me verify my changes worked.
My two questions are: first, is there something other than og:image I should be specifying? Since I can't specify it per post, it's in the head of the page itself; I would think it would pick that up, no?
Second, is there a way a developer can re-check, after the meta info has been changed, whether the changes worked, without having to wait out the cache TTL?
try this:
url/link?blah=1
url/link?blah=2
url/link?blah=3
to get around the cache.
This should trick it into thinking it's a new page each time.
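In code, that trick is just appending a throwaway query parameter to the URL you hand to the share button, something like this (the parameter name and value are arbitrary):

    // Hedged sketch: add a cache-busting query parameter so LinkedIn treats
    // the URL as a new page and re-scrapes it instead of reusing its cache.
    function cacheBustedShareUrl(pageUrl: string): string {
        const url = new URL(pageUrl);
        url.searchParams.set("blah", Date.now().toString()); // arbitrary name/value
        return url.toString();
    }

    // e.g. https://example.com/podcasts?blah=1700000000000
    console.log(cacheBustedShareUrl("https://example.com/podcasts"));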
Can I get a link to test?
Anthony Walz posted the correct answer. Over email he also helped with another problem I had, which corrected an issue I didn't realize existed until I looked.
My LinkedIn shares were not picking up the show title; they were picking up the page description instead (I have several podcasts showing on one page; we don't use individual post pages, they all play from the listing).
He pointed me to the developer docs on formatting sharing links, which give a real-world example:
https://www.linkedin.com/shareArticle?mini=true&url=http://developer.linkedin.com&title=LinkedIn%20Developer%20Network&summary=My%20favorite%20developer%20program&source=LinkedIn
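For anyone building the same thing, here is a rough sketch of generating that shareArticle link per entry, following the documented format above; the entry data and source name are placeholders.

    // Hedged sketch: build a LinkedIn shareArticle URL per podcast entry,
    // following the format from the developer docs quoted above.
    interface ShareEntry {
        url: string;
        title: string;
        summary: string;
    }

    function linkedInShareUrl(entry: ShareEntry, source: string): string {
        const params = new URLSearchParams({
            mini: "true",
            url: entry.url,
            title: entry.title,
            summary: entry.summary,
            source,
        });
        return `https://www.linkedin.com/shareArticle?${params.toString()}`;
    }

    // Placeholder data, not a real entry.
    console.log(linkedInShareUrl(
        {
            url: "https://example.com/podcasts/episode-12",
            title: "Episode 12: Show Title",
            summary: "Short episode description",
        },
        "Example Podcast"
    ));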
Thanks a ton for the assist, Anthony!

GA sessions splitting at the giving page (Secure Trading)

I have an issue that has been happening since we replaced the UA code with GTM and now run everything through Tag Manager.
Everything was working fine with analytics using plain UA: we could see ecommerce, user journeys etc. all in one session. However, last October we changed to GTM. Initially we left the UA ecommerce code in place, as our devs did not have the capacity at the time to replace it.
That caused an issue where everything after a certain page became a new session. I tried setting the tracker name in GTM to blank, which I've been told fixes the problem, but still 90% of those who gave (it's a charity) would start a new session on a giving process page. Note: about 10% appear not to be affected.
We then thought it might be because the GTM code and the UA code wouldn't accept each other as the same session, so we recently removed the UA ecommerce code and replaced it with GTM.
This broke the ecommerce altogether, and I only managed to fix that yesterday. We also had some virtual page views for the checkout journey which were also only fixed yesterday, but which were working fine on UA until we implemented GTM.
The issue seems to be that at the last virtual page of the form people are having their session ended. Then there's a processing page, and then the thank-you page with the ecommerce code on it.
I have looked through the user journeys from before we had GTM and I cannot see any pages in that journey that I can't see currently (by this I mean I don't think there are any other pages I should be making sure have GTM code on them, thus breaking the journey in two). But even if there were, surely, since GA sessions last 30 minutes, a phantom page shouldn't really matter.
Should I be setting the Tracker ID to blank again? I'm at a loss as I've been fixing all these problems since October and my brain has melted.
Edit: After a bit more research, it looks like Secure Trading might be the issue. Does anyone know how to stop Secure Trading from creating a new referrer, so the session stays the same?
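One common cause of this pattern is that the payment gateway comes back as a new referrer, which starts a new session. Two things usually help: adding the gateway's domain to the Referral Exclusion List in the GA property's admin settings, and enabling cross-domain linking so the client ID survives the hop. As a rough sketch, the underlying analytics.js setup (which GTM reproduces via the allowLinker field and its cross-domain settings) looks something like this; the property ID and gateway domain are placeholders, not the charity's real configuration:

    // Hedged sketch of the analytics.js cross-domain linker setup that GTM's
    // allowLinker / cross-domain options correspond to. The property ID and
    // gateway domain below are placeholders.
    declare const ga: (...args: unknown[]) => void;

    ga("create", "UA-XXXXXXX-1", "auto", { allowLinker: true }); // placeholder ID
    ga("require", "linker");
    // Decorate outbound links to the payment gateway so the same client ID
    // (and therefore the same session) is used when the visitor returns.
    ga("linker:autoLink", ["securetrading.net"]); // placeholder domain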

Newly created fan page has no "Go to App" button

I am making changes in preparation for February 1. I have a fan page with 30,000 likes. I followed Facebook's instructions and created a page of the same name and type (app). The new page does not have any likes (this may take a while?). Nor does the game have the button that my other apps all have ("Go to App").
I can't find where this is. I've looked through the newly created page's settings. I've also looked through the app's settings.
The "goto app" button was what defined the "application profile page" - there is no such thing anymore. No (new) applications will ever be able to have that type of page again. You'll have to just use your normal page that you created. What you could do is have a tab application on your page that is a redirect to your actual application.
As the OP has shown in his comments below, my answer above was misleading.
I re-read blog post number 611, linked to by the OP, and it states there:
The Like migration can take up to seven days, and it may be several hours before you see any movement on the Page. If you have a Vanity URL associated with your App Profile Page, we will transfer the Vanity URL to the Facebook Page so long as one doesn't already exist for the Page.
If you are still not seeing any progress with your migration, give it around a week to start updating. As you would imagine, there are hundreds of thousands of pages going through the same process as we speak.
That said, if your migration still hasn't completed after a week, you should file a bug report (or subscribe to an existing one; I'm sure there will be a couple of people having the same problem). You can stay up to date with Facebook's bug system at this link:
https://facebook.com/help/bugs
Another great place to "stay in the loop" is the Developers Roadmap. All changes will be listed there well before they are implemented (90 days in the case of a breaking change, i.e. a change that might cause existing code to stop functioning correctly).

New feed items not showing in Google Reader

There is a blog, powered by WordPress, which has a valid RSS feed (it opens fine in Safari) but doesn't show new posts in Google Reader. In fact, the latest article in Google Reader is from Jul 21, 2010, while the latest article on the blog dates to Aug 19, 2010.
What should I do to the RSS feed (escape characters? modify the XML? something else?) for it to work in Google Reader?
This is a reopened question: the original question was migrated to Super User, then closed there because it was considered a better fit for Stack Overflow, so no solution was ever provided and no chance was given to provide one. Please give it a chance to get answered.
Update:
Google Reader pulls new articles in groups of 10, and not the latest ones. For example, if 12 (or 11, or 13) new articles are not yet shown in Google Reader, then when the next one is added, the oldest 10 (exactly 10) of those articles appear in Google Reader, and the date shown is identical for each of them, as if all 10 were published in the same second - the second they appeared in Google Reader. This problem doesn't manifest itself in the other aggregators I've tried.
Update 2:
Articles started showing up regularly, so the problem is solved, at least temporarily. Why it happened I don't know; maybe it's because more readers subscribed (for testing purposes), or because of the PubSubHubBub plugin I added recently. Until it becomes clear, and for 3 more days, this question remains open.
I just added the blog to my Google Reader and had a bit of a play. I noticed the same behaviour you observed: I was missing the 5 most recent posts, and a bunch of about 10 of them all had the same date.
After doing a bit of a search on the web, I found this post, which explains how you can actually view the Published date via a tooltip on the right-hand side.
Then, once I clicked the "Refresh" button at the top of Google Reader, the new posts showed up.
I believe that high-volume blogs on the Google spiders' radar are indexed every few hours, so all posts have a Received date very close to their Published date, and nobody notices/cares that Reader is actually displaying the Received date.
For low-volume blogs, however, it seems the cache is updated much less frequently. Google has some tips for getting it to update - "Feed not updating in Reader". Maybe my subscription to the blog updated the cache, but as the spider has a delay I didn't see the updates until pressing "Refresh". Or maybe the act of pressing the "Refresh" button triggered it to look for new posts immediately.
Lastly, I subscribed to the blog from my wife's Google Reader account, and this time the 5 latest posts came up straight away, with matching Received times that translated back to roughly when I pressed the "Refresh" button (or maybe when I added the feed).
I feel your pain - I agree that it all seems a bit cumbersome for a low volume RSS feed ...
You may also want to check with the blog author / hosting company to see if they have turned down the Google indexing rate. Google can create high volumes of traffic on a site, and turning down the indexing rate (crawl rate) helps with that, but it b0rks Google Reader.
As other posters have mentioned, it could also be a factor of low popularity / low page rank / something else causing Googlebot to fail to crawl the blog frequently enough.
Google Reader's display depends on Google crawling the blog to pick up the latest content. Realistically, you'll want a client-side pull of the RSS feed to get the latest data, so you aren't dependent on Google crawling the website; Outlook 2010, Firefox and many others can do this. The client-side software pulls the updated RSS feed directly from the blog, capturing posts as they are published to the feed.
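As a rough illustration of that client-side pull, here is a minimal browser sketch that fetches the feed directly and lists the newest items; the feed URL is a placeholder, and the feed would need to allow cross-origin requests for this to run in a browser.

    // Hedged sketch: fetch the blog's RSS feed directly and list the latest
    // items, bypassing Google's crawl cache. FEED_URL is a placeholder and the
    // feed must permit cross-origin requests for this to work in a browser.
    const FEED_URL = "https://example-blog.com/feed/"; // placeholder

    async function latestPosts(): Promise<void> {
        const xml = await (await fetch(FEED_URL)).text();
        const doc = new DOMParser().parseFromString(xml, "application/xml");
        for (const item of Array.from(doc.querySelectorAll("item")).slice(0, 10)) {
            const title = item.querySelector("title")?.textContent ?? "(untitled)";
            const pubDate = item.querySelector("pubDate")?.textContent ?? "";
            console.log(`${pubDate} - ${title}`);
        }
    }

    latestPosts().catch(console.error);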
Thank you for your responses, I too have come up with some possible solutions (thanks to you).
I don't know whether it's something I did or something independent of that, but as of yesterday (when you answered this question), feeds started showing up normally.
Maybe it is because, thanks to you, the blog got more subscribers on Google Reader and the update rate bounced (just as @Bermo suggested).
Or maybe the introduction of the PubSubHubBub plugin changed something, but it's more likely the first explanation (the number of subscribers). Though it is still a mystery why other extremely unpopular blogs give me regular articles in Google Reader.
For now I will only upvote good answers, until everything becomes clear (can't really determine the exact cause) or until the last day of this bounty.
