Compare URIs for a search bot? - http

For a search bot, I am working on a design to:
* compare URIs and
* determine which URIs are really the same page
Dealing with redirects and aliases:
Case 1: Redirects
Case 2: Aliases e.g. www
Case 3: URL parameters and fragments, e.g. sukshma.net/node#parameter
I have two approaches I could follow. One approach is to explicitly check for redirects, which catches case #1. Another is to hard-code aliases such as www, which works for case #2. The second approach (hard-coding aliases) is brittle: the URL specification for HTTP does not mention the use of www as an alias (RFC 2616).
I also intend to use the Canonical Meta-tag (HTTP/HTML), but if I understand it correctly - I cannot rely on the tag to be there in all cases.
Do share your own experience. Do you know of a reference white paper or implementation for detecting duplicates in search bots?

Building your own web crawler is a lot of work. Consider checking out some of the open source spiders already available, like JSpider, OpenWebSpider or many others.

The first case would be solved by simply checking the HTTP status code.
For the 2nd and 3rd cases Wikipedia explains it very well: URL Normalization / Canonicalization.
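The normalization steps for cases 2 and 3 can be done without any network traffic; only case 1 (redirects) requires actually issuing requests and following 3xx responses. A minimal sketch of the offline part, using only Python's standard library (the treatment of "www." as an alias is the heuristic discussed above, not something any RFC guarantees):

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Reduce a URL to a comparison key (handles cases 2 and 3).

    Case 1 (redirects) still needs an HTTP request: follow 3xx
    responses, then normalize the final URL with this function.
    """
    scheme, netloc, path, query, _fragment = urlsplit(url)
    netloc = netloc.lower()
    # Heuristic: treat the "www." alias as equivalent to the bare host.
    if netloc.startswith("www."):
        netloc = netloc[4:]
    # Drop default ports.
    if (scheme, netloc.rsplit(":", 1)[-1]) in (("http", "80"), ("https", "443")):
        netloc = netloc.rsplit(":", 1)[0]
    # An empty path and "/" identify the same resource.
    if path == "":
        path = "/"
    # The fragment never reaches the server, so discard it (case 3).
    return urlunsplit((scheme.lower(), netloc, path, query, ""))
```

Two URLs then count as "the same page" when their normalized forms compare equal.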

itemtype with http or better https?

I use something like:
itemtype="http://schema.org/ImageObject"
but a request for http://schema.org/ImageObject will be forwarded to https://schema.org/ImageObject.
If I change to itemtype="https://schema.org/ImageObject", the Google SDTT shows no problem, but nearly all of Google's examples of structured data use http.
Which is best or recommended to use for itemtype: http://schema.org or https://schema.org?
From Schema.org’s FAQs:
Q: Should we write https://schema.org or http://schema.org in our markup?
There is a general trend towards using https more widely, and you can already write https://schema.org in your structured data. Over time we will migrate the schema.org site itself towards using https: as the default version of the site and our preferred form in examples. However http://schema.org -based URLs in structured data markup will remain widely understood for the foreseeable future and there should be no urgency about migrating existing data. This is a lengthy way of saying that both https://schema.org and http://schema.org are fine.
tl;dr: Both variants are possible.
The purpose of itemtype URIs
Note that the URIs used for itemtype are primarily identifiers; they typically don’t get dereferenced:
If a Microdata consumer doesn’t know what the URI in itemtype="http://schema.org/ImageObject" stands for, this consumer "must not automatically dereference" it.
If a Microdata consumer does know what the URI stands for, this consumer has no need to dereference this URI in the first place.
So, there is no technical reason to prefer the HTTPS variant. User agents won’t dereference this URI (in contrast to URIs specified in href/src attributes), and users can’t click on it. I think there is only one case where the HTTPS variant is useful: if a visitor looks into the source code and copy-pastes the URI to check what the type is about.
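Since the itemtype URI is an identifier rather than a link, a Microdata consumer comparing types can simply treat both schemes as equivalent. A minimal sketch (the function name and the accepted prefixes are my own illustration, not part of any spec):

```python
def schema_org_type(itemtype):
    """Map an itemtype URI to a bare schema.org type name,
    treating http:// and https:// (and an optional www.) as equivalent
    identifier spellings."""
    for prefix in ("http://schema.org/", "https://schema.org/",
                   "http://www.schema.org/", "https://www.schema.org/"):
        if itemtype.startswith(prefix):
            return itemtype[len(prefix):]
    return None  # not a schema.org type
```

A consumer built this way simply does not care which variant a page author chose.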
I would recommend sticking with the HTTP variant until Schema.org switches everything to HTTPS, most importantly the URI in RDFa’s initial context.
The Schema.org specification for the type ImageObject indicates:
Canonical URL: http://schema.org/ImageObject
It is probably useful to refer to the canonical URL because it is the “preferred” version of the web page.

Is there a preferred canonical form for the path part of URLs?

All of these URLs are equivalent:
http://rbutterworth.nfshost.com/Me
http://rbutterworth.nfshost.com/Me/
http://rbutterworth.nfshost.com/Me/.
http://rbutterworth.nfshost.com/Me/index
http://rbutterworth.nfshost.com/Me/index.html
The "rel='canonical'" link allows me to specify whichever I want.
Is one of those forms considered "better" or "more standard" than the others?
As a maintainer, I personally prefer the first one, as it allows me the freedom to change "Me" to be "Me.php", or change "index.html" to be "index.shtml", or some other form should I ever need to, without having to define redirects, or to change any existing links to this URL. (This isn't specific to "index"; it could be for any web page.)
I.e. using that simplest form avoids publishing what is only an implementation detail that is best hidden from the users.
Unfortunately, of all the forms, my preferred choice is the only one that web servers don't like; they return "HTTP/1.1 301 Moved Permanently" and add the trailing "/".
For directories, is incurring this redirection penalty worth it?
For non-directories, is there any reason I shouldn't continue omitting the suffix?
Added after receiving the answer:
It's nice to know I'm not the only one that thinks omitting suffixes is a good idea.
And I just realized that my problem with directories goes away if I use "directoryname/index" as the canonical form.
Thanks.
For directories, is incurring this redirection penalty worth it?
No.
"The canonical URL for this resource is a 301 redirect to another URL" doesn't make sense.
For non-directories, is there any reason I shouldn't continue omitting the suffix?
No.
There is a reason to omit the suffix: it leaks information about the technologies used to build the site, and makes it harder to change them (e.g. if you moved away from static HTML files to a PHP-based system, you'd need to redirect all your old URLs, or configure your server to process files with a .html extension as PHP, which is possible but confusing).
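If the 301 penalty is the only objection, the extensionless form can usually be served directly rather than redirected. A sketch for Apache, assuming mod_rewrite is enabled (MultiViews achieves much the same through content negotiation):

```apache
# Either let content negotiation find page.html for /page ...
Options +MultiViews

# ... or map it explicitly: serve /page from page.html with no redirect.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html [L]
```

With either in place, the suffix stays an implementation detail: swapping .html for .php later only means changing the internal mapping, not the published URLs.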

Concrete 5 search results page url

The Concrete5 search results page URL contains some parameters. How can I remove those parameters and make the URL user-friendly?
On an Apache server I recommend using the mod_rewrite module and its RewriteEngine.
With this module you can specify aliases for internal URLs (including ones with parameters). You can also use regular expressions.
RewriteEngine on Wikipedia
mod_rewrite tutorial
Short answer: it's probably not worth the trouble.
Long answer...
I'm guessing you see three query parameters when using the search block:
query
search_paths[]
submit
The first parameter is required to make the searches work, but the other two can be dropped. When I build concrete5 themes, I usually "hard-code" the html for the search form, so that I can control which parameters are sent (basically, don't provide a "name" to the submit button, and don't include a "search_paths" hidden field).
The "query" parameter, though, is not going to be easy to get rid of. The problem is that for a search, you're supposed to have a parameter like that in the URL. You could work around this by using JavaScript -- when the search form is submitted, use some jQuery to rewrite the request so it puts that parameter at the end of the URL (for example, http://example.com/search?query=test becomes http://example.com/search/test). Then, as @tuxtimo suggests, you add a rewrite rule to your .htaccess file to take that last piece of the URL and treat it as the ?query parameter that the system expects. But this won't work if the user doesn't have JavaScript enabled (and hence probably not for Googlebot either, which means that this won't really serve you any SEO purpose -- which I further imagine is the real reason you're asking this question to begin with).
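The rewrite rule described above might look like the following .htaccess sketch (the /search path and the query parameter name come from this discussion; adjust them to your site):

```apache
RewriteEngine On
# Internally rewrite /search/<term> to /search?query=<term>.
# Guard against rewriting real files or directories.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^search/(.+)$ /search?query=$1 [L,QSA]
```

Note that this rule exhibits exactly the caveat mentioned below: everything under /search/ is swallowed as a search term, so no real page can live beneath that path.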
Also, you will run into a lot of trouble if you ever add another page under the page that you show the search results on (because you have the rewrite rule that treats everything after the top-level search page path as a search parameter -- so you can never actually reach an address that exists below that path).
So I'd just make a nice clean search form that only sends the ?query parameter and leave it at that -- I don't think those are really that much less user-friendly than /search-term would be.

RESTful URLs and folders

On the Microformats spec for RESTful URLs:
GET /people/1
return the first record in HTML format
GET /people/1.html
return the first record in HTML format
and /people returns a list of people
So is /people.html the correct way to return a list of people in HTML format?
If you just refer to the URL path extension, then, yes, that scheme is the recommended behavior for content negotiation:
path without extension is a generic URL (e.g. /people for any accepted format)
path with extension is a specific URL (e.g. /people.json as a content-type-specific URL for the JSON data format)
With such a scheme the server can use content negotiation when the generic URL is requested and respond with a specific representation when a specific URL is requested.
Documents that recommend this scheme are among others:
Cool URIs don't change
Cool URIs for the Semantic Web
Content Negotiation: why it is useful, and how to make it work
You have the right idea. Both /people and /people.html would return HTML-formatted lists of people, and /people.json would return a JSON-formatted list of people.
There should be no confusion about this with regard to applying data-type extensions to "folders" in the URLs. In the list of examples, /people/1 is itself used as a folder for various other queries.
It says that GET /people/1.json should return the first record in JSON format. - Which makes sense.
URIs and how you design them have nothing to do with being RESTful or not.
It is a common practice to do what you ask, since that's how the Apache web server works. Let's say you have foo.txt, foo.html and foo.pdf, and ask to GET /foo with no preference (i.e. no Accept: header). A 300 MULTIPLE CHOICES would be returned with a listing of the three files so the user could pick. Because browsers do such marvelous content negotiation, it's hard to link to an example, but here goes: an example shows what it looks like, except that the reason you see the page in the first place is the different case of the file name ("XSLT" vs "xslt").
This Apache behaviour is echoed in conventions and various tools, but really it isn't important. You could have people_html or people?format=html or people.html or sandwiches or 123qweazrfvbnhyrewsxc6yhn8uk as the URI which returns people in HTML format. The client doesn't know any of these URIs up front; it's supposed to learn them from other resources. A human could see the result of All People (HTML format) and understand what happens, while ignoring the strange-looking URI.
On a closing note, the microformats URL conventions page is absolutely not a spec for RESTful URLs; it's merely guidance on making URIs that apparently are easy to consume by various HTTP libraries for some reason or another, and has nothing to do with REST at all. The guidelines are all perfectly OK, and following them makes your URIs look sane to other people who happen to glance at them (/sandwiches is admittedly odd). But even the cited AtomPub protocol doesn't require entries to live "within" the collection...

Do search engines respect the HTTP header field “Content-Location”?

I was wondering whether search engines respect the HTTP header field Content-Location.
This could be useful, for example, when you want to remove the session ID argument out of the URL:
GET /foo/bar?sid=0123456789 HTTP/1.1
Host: example.com
…
HTTP/1.1 200 OK
Content-Location: http://example.com/foo/bar
…
Clarification:
I don’t want to redirect the request, as removing the session ID would lead to a completely different request and thus probably also a different response. I just want to state that the enclosed response is also available under its “main URL”.
Maybe my example was not a good representation of the intent of my question. So please take a look at What is the purpose of the HTTP header field “Content-Location”?.
I think Google just announced the answer to my question: the canonical link relation for declaring the canonical URL.
Maile Ohye from Google wrote:
MickeyC said...
You should have used the Content-Location header instead, as per:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
"14.14 Content-Location"
@MickeyC: Yes, from a theoretical standpoint that makes sense and we certainly considered it. A few points, however, led us to choose rel="canonical":
Our data showed that the "Content-Location" header is configured improperly on many web sites. Sometimes webmasters provide long, ugly URLs that aren’t even duplicates -- it's probably unintentional. They're likely unaware that their webserver is even sending the Content-Location header.
It would've been extremely time consuming to contact site owners to clean up the Content-Location issues throughout the web. We realized that if we started with a clean slate, we could provide the functionality more quickly. With Microsoft and Yahoo! on-board to support this format, webmasters need to only learn one syntax.
Often webmasters have difficulty configuring their web server headers, but can more easily change their HTML. rel="canonical" seemed like a friendly attribute.
http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html?showComment=1234714860000#c8376597054104610625
Most decent crawlers do follow Content-Location. So, yes, search engines respect the Content-Location header, although that is no guarantee that the URL having the sid parameter will not be on the results page.
In 2009 Google started looking at URIs qualified as rel=canonical in the response body.
Looks like since 2011, links formatted as per RFC5988 are also parsed from the header field Link:. It is also clearly mentioned in the Webmaster Tools FAQ as a valid option.
I guess this is the most up-to-date way of providing search engines some extra hypermedia breadcrumbs to follow, thus allowing you to keep them out of the response body when you don't actually need to serve them as content.
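For reference, the RFC 5988 form is a Link header whose target sits in angle brackets followed by a rel parameter. A small sketch for emitting and recognizing it (the helper names are mine; the header syntax is from the RFC):

```python
import re

def canonical_link_header(url):
    """Serialize an RFC 5988 Link header value advertising the canonical URL."""
    return f'<{url}>; rel="canonical"'

def parse_canonical(link_header):
    """Extract the canonical target from a Link header value, if present."""
    m = re.search(r'<([^>]+)>\s*;\s*rel="?canonical"?', link_header)
    return m.group(1) if m else None
```

So the session-ID example above could answer with Link: <http://example.com/foo/bar>; rel="canonical" instead of relying on Content-Location.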
In addition to using 'Location' rather than 'Content-Location', use the proper HTTP status code in your response, depending on your reason for the redirect. Search engines tend to favor a permanent redirect (301) over a temporary one (302).
Try the "Location:" header instead.
