The IANA registry contains an official link relation type of "related":
related: Identifies a related resource.
https://www.iana.org/assignments/link-relations/link-relations.xhtml
I have also read the referenced RFC 4287:
The value "related" signifies that the IRI in the value of the
href attribute identifies a resource related to the resource
described by the containing element. For example, the feed for a
site that discusses the performance of the search engine at
"http://search.example.com" might contain, as a child of
atom:feed:
<link rel="related" href="http://search.example.com/"/>
An identical link might appear as a child of any atom:entry whose
content contains a discussion of that same search engine.
But that only made it more confusing to me. Aren't all links related? After all, rel = relation.
Can anyone try to clarify this and give valid use cases for rel="related"? Is it just a catch-all relation type?
It’s a generic “this link is related to this entry” link relation.
The related link relation serves three use cases:
Link an entry to a related entry from the same publisher. Some content management systems can keyword-match entry archives and find related/suggested/recommended entries.
Link to a related external document. E.g. an entry discussing Facebook's privacy policy could link to both the policy itself and a Facebook blog post announcing a policy change. (You can include multiple links with the same relation.) This use case is intended to enable link clustering, e.g. grouping entries from different feeds that discuss the same thing/link, or ranking a "hot topic" at the top of the feed list (like the Fever feed reader did).
Blogs doing reading lists/daily digests/link curation/etc. can link to all the external documents they recommend.
The last two use cases are expressions of the Semantic Web. This was one of the ways we were supposed to get personalized sets of trending topics/links/things from our feed subscriptions. Twitter and Google Now are the global arbiters of this today.
However, this link relation can also be used by feed readers to display a list of related links from the publisher. The title attribute on the link element can supply a human-readable title for presenting them to people.
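For example, a link-digest entry could carry several related links with human-readable titles (the URLs and titles here are made up):

<entry>
  <title>Weekly reading list</title>
  <id>urn:uuid:60a76c80-d399-11d9-b93c-0003939e0af6</id>
  <updated>2016-05-10T12:00:00Z</updated>
  <link rel="alternate" href="http://blog.example.com/weekly-reading-list"/>
  <link rel="related" href="http://example.org/privacy-study" title="The privacy study discussed in this entry"/>
  <link rel="related" href="http://example.net/rebuttal" title="A rebuttal worth reading"/>
</entry>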
Posts made via the Share on LinkedIn API for users on the new user interface appear on their accounts with at most: a user message/comment, image, title, and link domain. However, the documentation on the Share on LinkedIn API (https://developer.linkedin.com/docs/share-on-linkedin) describes that the request body can also contain a "description" field with text up to 256 characters. When the description and all the post fields are provided explicitly to the API (as in the example in the documentation), the description field does not appear for users on the new UI. The description field did appear for users when they were on the old UI.
The Share on LinkedIn API provides an additional option for sharing by omitting the post details fields (title, image, description), and allowing LinkedIn to generate the post based on the Open Graph data it finds at the link URL. However, the result is the same as above for users on the new UI.
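For reference, here is a minimal sketch of the first kind of request (field names follow the documentation's example; the access token, URLs, and content values are placeholders):

// Sketch of a v1 share request with all post fields supplied explicitly.
// accessToken is assumed to be a valid OAuth 2.0 token.
fetch('https://api.linkedin.com/v1/people/~/shares?format=json', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'x-li-format': 'json',
    'Authorization': 'Bearer ' + accessToken
  },
  body: JSON.stringify({
    comment: 'A user message about the link',
    content: {
      title: 'Example article',
      description: 'Up to 256 characters that do not show on the new UI.',
      'submitted-url': 'https://example.com/article',
      'submitted-image-url': 'https://example.com/article.png'
    },
    visibility: { code: 'anyone' }
  })
});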
Is this a bug, or is the documentation out-of-date?
These are your only options. Choose one:
Share image.
Share description.
If you try to share both, you will only see an image.
I argued with LinkedIn support about this for two weeks, and I've also directly contacted many of the developers. They have agreed that this is the logic and this is how it is designed to work. I tested this theory on Wikipedia (has a description, no image) and GitHub (has both a description and an image). Results:
Wikipedia: Description ONLY -- Works (but no image)!
GitHub: Image ONLY -- Works (but no description)!
I made a test site with only a description to verify this, and it appears confirmed.
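For reference, the head of my description-only test page looked roughly like this (URLs are placeholders):

<head>
  <meta property="og:title" content="Share test"/>
  <meta property="og:description" content="A description, and deliberately no og:image."/>
  <meta property="og:url" content="https://example.com/share-test"/>
  <!-- no og:image tag, so LinkedIn can only pick up the description -->
</head>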
Sure, the Official Microsoft LinkedIn Share Documentation makes no mention whatsoever of this, but once again, reality is in direct conflict with Microsoft's understanding.
I'll use StackOverflow as an example.
A user can reach a question/answer page from
outside of stackoverflow
from another page of stackoverflow
from a search result
from a link in other posts (link in another question or answer)
from the Similar Questions section
from a user profile page
I'd like to know how those internal links are used.
The main question is: what percentage of users reached the Q/A page via each type of link on Stack Overflow?
I want to know the answer for the Q/A pages as a whole, not for each individual Q/A page.
Is this implementable using GA? If so, I'd like a general guide so I can dig in.
Is there a term for this kind of analysis? (Internal link analysis? Knowing a term helps me google further.)
Edit
I found one way to do this using GA's Site Search feature.
http://cutroni.com/blog/2010/03/30/tracking-internal-campaigns-with-google-analytics/
It's from 2010, and I'm not sure it's still the best way to do it.
To be able to tell apart different links on the same page (e.g. two links leading to the same question), you will need to set up Enhanced Link Attribution by requiring the plugin via this command:
ga('require', 'linkid', 'linkid.js');
The plugin also requires decorating each link that refers to the same destination (the question) with a unique id. You can also choose to decorate a container element, such as a div which holds the link, or its parent (up to 5 levels).
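Put together, a minimal sketch looks like this (the property ID and element ids are placeholders, and Enhanced Link Attribution also has to be switched on in the GA property settings):

<script>
  ga('create', 'UA-XXXXXX-Y', 'auto');   // placeholder property ID
  ga('require', 'linkid', 'linkid.js');  // Enhanced Link Attribution plugin
  ga('send', 'pageview');
</script>

<!-- Decorated containers let the plugin tell apart two links
     that point at the same question URL. -->
<div id="similar-questions">
  <a href="/questions/123">From the Similar Questions section</a>
</div>
<div id="post-body">
  <a href="/questions/123">Inline link in an answer</a>
</div>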
There are a number of ways to get at this data.
One way is to look at Behavior > Behavior Flow in the reporting section. The view creates a Sankey diagram, which you can narrow down using a custom segment plus a content grouping. The advantage of the Behavior Flow is that it is visual, but it is difficult to customize.
Another approach is to locate the question under Behavior > Site Content > All Pages and then set the secondary dimension to "Previous Page Path". You can use the advanced filter to select a specific question and to limit the previous pages to page paths matching the pattern for each type of page you discussed.
To view the attribution for different links you need to select the In-Page Analytics tab.
FYI, I've implemented it using Google Tag Manager.
I defined an event navigateToQnA.
And fired the event with a different event action for each type of click I care about.
Maybe a bit more laborious than the Site Search method I linked in the question.
But cleaner in the sense that you don't pollute URL parameters to collect the data.
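For illustration, the push fired by my click triggers looks roughly like this (the action values are just the naming I chose; adapt them to your link types):

dataLayer.push({
  event: 'navigateToQnA',            // the custom event defined in GTM
  eventAction: 'similar-questions'   // or 'search-result', 'user-profile',
                                     // 'post-link', etc., per click type
});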
In Google Reader (R.I.P.) we could mark interesting items with a special tag, make that tag public, and then show the links on our blogs or websites.
Is there a way to do this with Google Reader alternatives like Inoreader, Feedly, AOL Reader, etc.?
I should probably start by saying that I'm the BDFL of Inoreader, but I feel obliged to answer you. If anyone thinks my answer is inappropriate or that this can be achieved with one of the other mentioned options, feel free to bash me in the comments :)
Yes, you can do that in Inoreader.
Since you are familiar with Google Reader, you shouldn't have much difficulty starting up with it, but if you do, here's a quick guide to get you started.
Depending on what you need to achieve, the option you want is accessible via right-click on a folder or a tag.
Then, in the dialog that pops up, you will see an Export option. Click it and you will get 3 links: an RSS feed, an HTML page (what you need), and a public OPML file (for folders only).
A few notes on folders and tags:
Folders are used to group sources (RSS, social and other feeds) and content inside them is automatically populated from the feeds.
Tags, on the other hand, are mostly manually populated by you. When you read an article and find it interesting, you can press "T" or click the label icon at the bottom of the article to tag it. This behavior is almost identical in all major RSS readers. Working with tags in Inoreader is covered in detail in this blog post.
Now, I said mostly before, because tags can also be automatically populated by Inoreader's Rules. Basically, they work like your email filters. You can set up keywords or other conditions and tag articles automatically as they arrive. This feature is covered in this blog post.
Hope this helps!
Since this page (https://www.freebase.com/policies) doesn't work, and since credit data like the Wikipedia URI and statement seem to have suddenly disappeared from topic queries (example: https://www.googleapis.com/freebase/v1/topic/m/017n9?filter=/common/topic/description), I would like to know how to credit Freebase topics that contain text from Wikipedia.
Is it enough to put links to the Freebase and Wikipedia homepages with their respective licenses somewhere on my site, or must I link every single topic to its Wikipedia page?
Thanks.
When I make a substantial update (not just correcting a typo) to an article in my blog, I want to ensure that readers see the updated article again in their news feed. From what I have read, here are some of the options I see:
Create an entirely new article (largely a duplicate of the original). Apparently a bad idea -- duplicate content would be bad for SEO.
Change the published and/or updated timestamp of the article. It seems that, in most readers, this will not make the article show up as unread.
Change the RSS item GUID or Atom entry id. This is a big NO-NO according to the Atom specs, but I'm not sure about RSS.
So, there doesn't seem to be a good option, unless I'm missing something.
What are the ramifications of changing RSS item GUID or Atom entry id? Are the Feed Police going to show up at my door for changing an article ID?
Updating the "updated" field for that entry should be correct. Do not forget to also update the "updated" field for the feed itself and any ETag/Last-Modified HTTP headers (if they exist and are not auto-generated), and wait for, or force, the reader to actually do the refresh.
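A minimal sketch of what that looks like in an Atom feed (the id and timestamps are made up):

<feed xmlns="http://www.w3.org/2005/Atom">
  <updated>2016-05-10T12:00:00Z</updated>            <!-- bumped with the entry -->
  <entry>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>  <!-- unchanged -->
    <title>My substantially revised article</title>
    <updated>2016-05-10T12:00:00Z</updated>          <!-- new timestamp -->
  </entry>
</feed>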
If you still have problems with some of the readers, you should check with the feed reading software's authors to see if that behavior is intentional.
As for the second part, changing the id won't get the Feed Police to your door, but if it happens often enough, articles showing up as duplicates could annoy your followers into just ignoring/dropping the feed.
See this and this answer, too.
The RSS <guid> or Atom <id> is an element used to uniquely identify its parent item. Feed readers and aggregators use this field to determine if the item has already been downloaded or fetched.
If you change an RSS <guid> or Atom <id>, then readers and aggregators may use this as a signal or flag that the item is to be downloaded again because the GUID or ID held previously no longer matches what it has in its database or lookup.
Changing the GUID or ID is not a way to force an update in place. It's a way to say, "I have something brand new for you to download/fetch".
In RSS, adding something like ?fake=parameter to the GUID can be a substitute way to force a new download, as shown below. But the previously fetched item will still remain, because it doesn't share the GUID.
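For illustration (the ?rev=2 marker is arbitrary; the page itself doesn't need to handle it):

<item>
  <title>My revised article</title>
  <link>https://example.com/post/42</link>
  <guid isPermaLink="false">https://example.com/post/42?rev=2</guid>
</item>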
You can't reliably force a download via RSS or Atom using the publish or updated date.
The best you can do is change the contents of the item and allow readers or aggregators to update as they wish; not all of them behave the same when they see a change in content for an item they already have.
As both answers at this point state: there is no perfect way. Changing the guid will make everyone believe the content is brand new, hence probably creating duplicate content, and changing just the updated element will probably not always trigger a full refresh.
Using PubSubHubbub may help, as it uses fat pings. This means that the subscriber gets the updated data right away and can store it under the same key/unique id as the previous version.
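For reference, a feed advertises its hub with two link elements in the feed itself (the hub shown here is Google's public hub; any hub works, and the feed URL is a placeholder):

<link rel="hub" href="https://pubsubhubbub.appspot.com/"/>
<link rel="self" href="https://example.com/feed.atom"/>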