I was looking at amazon.com and noticed that for a product like "Really Really Really Long Book Title," they will have a URL like "amazon.com/Really-Long-Book-Title/ref?id=1&anotherId=2",
and for a short title like "Success," they will add other words, like the author name: "amazon.com/Success-John-Smith/ref?id=1&anotherId=2". If I remove these words, like so: "amazon.com/ref?id=1&anotherId=2", the URL still resolves.
Does it hurt SEO to have multiple URLs that resolve to the same page?
How are these words even added to the URL? Is it done programmatically, or do they have someone hand-pick words and store them in a database for each product?
I've been trying to expand my knowledge of SEO, so I'd really like to learn how this is done as thoroughly as possible. I'd greatly appreciate recommendations for any resources, and also advice based on personal experience, so that if I implement URLs like this, I can do it correctly. I know I can Google this stuff, but there always seem to be 1,000 ways to do something and I'd just like to hear some personal recommendations.
For what it's worth, I use asp.net 4.0 (c#) and the IIS7 URL rewrite toolkit.
Thanks a lot!
The IIS7 URL Rewrite toolkit is the best tool to use in your case. Here are my answers to your questions.
Does it hurt SEO to have multiple URLs that resolve to the same page?
It does not, as long as you show search engines which URL is the primary one. You can do this by adding a rel="canonical" link pointing to the primary URL. The best example of this is Stack Overflow, which is doing very well in terms of SEO. If you use http://stackoverflow.com/questions/5392137/ you will be pointed to this page, and if you use http://stackoverflow.com/questions/5392137/url-rewrite-adding-keywords you will be on this page as well. Obviously the second URL has more keywords, which is great for SEO, and it is more user friendly as well, since users know what the URL is all about.
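For the ASP.NET 4.0 (C#) setup you mention, a minimal Web Forms sketch of emitting that canonical tag from code-behind could look like the following; GetCanonicalUrl is a hypothetical helper you would implement to build the preferred URL for the current product.

```csharp
// Minimal sketch: add <link rel="canonical" href="..."/> to the page head.
// Requires <head runat="server"> in the master page or .aspx.
using System;
using System.Web.UI;
using System.Web.UI.HtmlControls;

public partial class ProductPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var canonical = new HtmlLink();
        canonical.Href = GetCanonicalUrl();           // e.g. "/Success-John-Smith/ref?id=1"
        canonical.Attributes["rel"] = "canonical";
        Page.Header.Controls.Add(canonical);
    }

    // Hypothetical: in a real page this would look up the product's preferred slug.
    private string GetCanonicalUrl()
    {
        return Request.Url.GetLeftPart(UriPartial.Path);
    }
}
```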
How are these words even added to the URL? Is it done programmatically, or do they have someone hand-pick words and store them in a database for each product?
If you are a developer, then it is not your responsibility anymore. SEO is 20% technical and 80% marketing (that is my rough calculation, but you get the point :-)). The marketing folks should handle that after you give them access to write or rewrite URLs. They may find some keywords and add some of them to the URL based on their tactics. Elad Lachmi gave a good answer to this question. Stack Overflow uses the question title as the URL, which is reasonable; hiring lots of SEO people to find keywords for each question and then manually add them to URLs would not be a good option for SO. But for commercial websites, it can be worthwhile to have someone do it manually. The answer depends on what kind of website yours is.
Hope this helps
I love the rewrite toolkit because you can do ANYTHING!
From my experience, letting the content editor set whatever URL they like is the best option. Computers are not big on semantics. You can create set rules, and they might be OK (it's not that hard to tell a computer "if the title is not long enough, add the author name"), but since a human adds the products anyway, a little SEO tutorial for the content editors can go a long way. You would be surprised what people who know their products can come up with. I have seen great titles and URLs from our content editors that I would never have thought of in a million years from my position as a developer.
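If you do want a programmatic fallback for the "set rules" option, here is a rough sketch of how the keyword part of a URL might be generated from data you already store; the minimum-length threshold and the hyphenation rules are assumptions of mine, not anything Amazon has documented.

```csharp
// Minimal sketch: build a URL slug from the product title, appending the
// author when the title alone is too short to carry useful keywords.
using System.Text.RegularExpressions;

public static class Slug
{
    public static string For(string title, string author, int minLength = 20)
    {
        string text = title.Length >= minLength ? title : title + " " + author;
        string slug = Regex.Replace(text, @"[^A-Za-z0-9\s-]", "");   // strip punctuation
        slug = Regex.Replace(slug, @"\s+", "-").Trim('-');           // spaces -> hyphens
        return slug;                                                 // e.g. "Success-John-Smith"
    }
}
```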
Related
I need to make a website for my hockey club. My main purpose for this site is allowing people to sign in and post articles and training schedules in their section, e.g. Mens, Womens, Juniors and Masters. I want to have some kind of upload manager that will allow them to choose where they post the info to (e.g. Mens, Masters and Homepage).
This is the main functionality I'm looking for at the moment.
The club's previous website used Joomla, which I have hated. I found it to be way too restrictive. It's on an old version, so there are probably many improvements in the new version, but from what I've read it seems like it still has a lot of restrictions on how content is managed. I am open to trying it again though.
I've used WordPress before and liked it, but those were small-scale projects and I'm not sure it really fits what I'll be trying to do here, since it mostly deals with blog posts and I'll need functionality to upload and display files.
I've had a look around at some other options like Squarespace and SilverStripe. I really like the simplicity of SilverStripe (one thing I hate about Joomla is the clutter on the opening page) and am leaning towards it right now if I can find a nice way to have people post news to multiple pages at once.
If anyone has any suggestions they'd be very welcome. I know html, css, javascript and a bit of php. I'm learning Ruby atm so wouldn't be against using it so I could learn more but it might be a bit much for a sports website.
First off, it's nice to see someone else who likes hockey :) You can't use Squarespace; you'll need an Apache server for what you want. You will need some way to store information, so you'll need a MySQL database and probably some more advanced knowledge of PHP (I'm assuming you don't yet know how to connect to databases and do some other things). WordPress is too limited, so you can't use that either. I have never used SilverStripe personally, but it seems like the best of your options here. You'll probably need some more knowledge of PHP before you attempt to build a members system.
I am making a website which allows people to discuss news topics. I was looking to make a news feed which shows the most talked-about topics and topics followed by users, however I am not sure how to do this. I can't think of a process for it, and I don't think RSS feeds are the answer. Help would be appreciated.
Same here. I am developing a website too and learning how to develop an RSS engine of my own.
http://news.bbc.co.uk/2/hi/help/rss/default.stm#mysite
http://www.wikihow.com/Create-an-RSS-Feed
But I need more information. What I know is this: an RSS reader searches for the latest content on news websites or blogs (by looking at their dates, perhaps) and places the latest post on top. The problem is that I am not able to create that yet. I need to learn a lot about RSS, and especially XML.
But your problem is different, I think. You want to show the trending posts at the top. In that case, I think you will need to create an algorithm to rank your pages/posts, and this algorithm should evaluate the real hotness of the content. For example, a 20-day-old post on your website might still be hotter than the latest trending news and could still surface near the top.
But now the question is: how would this algorithm decide whether a post is hot or not? Well, this can be done on the basis of likes or hearts given to it by users, comments on the page, links in the comments on the page, shares (you can track that), external links to your post, and so on. Now it's up to you what you prefer to make a post trend. You can give more weight to external links, or maybe comments, or you could set thresholds for each of these which, when reached, mark the post as a full success.
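To make that concrete, here is a rough sketch of the kind of scoring function I have in mind; the weights and the age "gravity" exponent are arbitrary numbers I picked, so tune them to taste.

```csharp
// Minimal sketch: weighted engagement points decayed by age, so an old but
// very popular post can still outrank a fresh but quiet one.
using System;

public static class Trending
{
    public static double Score(int likes, int comments, int shares,
                               int externalLinks, DateTime postedUtc)
    {
        double points = likes * 1.0
                      + comments * 2.0
                      + shares * 3.0
                      + externalLinks * 4.0;
        double ageHours = (DateTime.UtcNow - postedUtc).TotalHours;
        return points / Math.Pow(ageHours + 2.0, 1.5);   // "gravity" = 1.5
    }
}
```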
Sorry if that isn't clear. I was just thinking through possible solutions; I don't already know the answer myself.
I have just been checking the yearly stats for a blog I manage, and there is one post from 2008 that is getting a LOT of views, which doesn't make any sense as the info in it is outdated.
I pulled the access_log entries for this post and am finding a lot of referrers from cials-pills-online.info and sites like that. Not a lot of entries for any one of these sites, but say 20-30 a month.
I have looked around the site and can't see anything obvious amiss. Can anyone tell me where to look and what to look for to see if there's any monkey business related to this post?
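In case it helps, a tally along these lines is enough to see which referrer hosts dominate the traffic to that post (a rough C# sketch, assuming the standard Apache combined log format, where the last two quoted fields of each line are the referrer and the user agent):

```csharp
// Minimal sketch: count access_log hits per referrer host.
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class ReferrerTally
{
    static void Main(string[] args)
    {
        // Combined log format ends with: "referrer" "user-agent"
        var tail = new Regex("\"(?<ref>[^\"]*)\" \"[^\"]*\"$");

        var counts = File.ReadLines(args[0])                 // path to access_log
            .Select(line => tail.Match(line))
            .Where(m => m.Success && m.Groups["ref"].Value != "-")
            .Select(m =>
            {
                Uri u;
                return Uri.TryCreate(m.Groups["ref"].Value, UriKind.Absolute, out u)
                    ? u.Host : null;
            })
            .Where(host => host != null)
            .GroupBy(host => host)
            .OrderByDescending(g => g.Count());

        foreach (var g in counts)
            Console.WriteLine("{0,6}  {1}", g.Count(), g.Key);
    }
}
```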
Well, there is a rather simple way to see whether it is really monkey business or not: if you don't need the post, delete it and resubmit it. It really depends on what the post is about; maybe you just did some great SEO or are ranking for a related keyword. If they are leaving comments, maybe it's just for backlinks. This happens when you have an off-topic post: for example, if my site is about planes and I make a post about cars, I get people who want backlinks from a post about cars.
There are also a lot of great security plugins; just search the plugins section.
I noticed today that SO uses magic URLs of the form ".../questions/[nnn]/[description]". As an experiment, while viewing a question I changed the description and hit Enter. As expected, it did not affect the request and the question showed just fine, only with a garbage URL:
http://stackoverflow.com/questions/1933822/flksdjfkljlfs
I assume, but could be wrong, that this reflects a RESTful approach to URLs. Since I am in the process of building a new web app, I was wondering: why is this better than some of the more "traditional" alternatives?
http://stackoverflow.com/questions/1933822
http://stackoverflow.com/questions?Question=1933822
It seems wrong to me, for reasons I can't define, to have a URL with completely redundant and ignored information (the question name).
It's for SEO (Search Engine Optimization) reasons. Google favors URLs that contain the text of the search query. It's a bit of cheating (that is just my personal opinion), but most websites do it, including mine.* :)
The URL for this question contains its title: "What's the advantage of URLs with semantically dead components?"
Let's say you search for "urls with semantically dead components". Google uses many factors in deciding the order in which results are shown; but let's say that another site scores exactly the same on all of them but has a URL like www.site.com/question/1000. Google would display Stack Overflow first.
*I go a bit further and 301 (permanently redirect) the most popular search engine crawlers that request /url/id without the /text part.
This is done for both the reasons mentioned above and also to have only one valid (canonical) URL for search engines.
If you choose to include extra text in your URLs, I suggest you follow the same approach, since Google somewhat penalizes websites for duplicate content. Always have one canonical URL, at least for search engines!
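For what it's worth, a sketch of that 301 approach in ASP.NET MVC (3 or later, for RedirectPermanent) might look like this; the Question class and LoadQuestion are hypothetical stand-ins for your own data access.

```csharp
// Minimal sketch: assume a route like "questions/{id}/{slug}" where slug is
// optional. Anything but the canonical slug gets a permanent redirect.
using System;
using System.Web.Mvc;

public class QuestionsController : Controller
{
    public ActionResult Show(int id, string slug)
    {
        Question question = LoadQuestion(id);
        if (question == null)
            return HttpNotFound();

        // Missing or wrong slug: 301 to the single canonical URL.
        if (!string.Equals(slug, question.Slug, StringComparison.OrdinalIgnoreCase))
            return RedirectPermanent(Url.Action("Show", new { id = id, slug = question.Slug }));

        return View(question);
    }

    // Hypothetical data access; replace with your repository/ORM call.
    private Question LoadQuestion(int id) { return null; }
}

public class Question
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Slug { get; set; }
}
```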
Well with the title as part of the URL, it has meaning when read by humans, too.
I think SO uses a nice approach here. It's a combination of user-friendly and computer-friendly information. The service is able to efficiently locate the correct question because of the ID number, while humans can quickly get an understanding of what the web page is about just by looking at the URL before even clicking on it.
I have a problem on my blog: I get visits from "kind" bots that leave "nice" comments on my blog posts :(
I'm wondering if there is a smarter way to keep them out, besides using the captcha modules.
My problem with the captcha modules is that I think they are annoying to the user :(
I don't know if it's any help to anyone but my site is in asp.net mvc beta.
Have you thought about using this?
http://akismet.com/
From their FAQ
When a new comment, trackback, or pingback comes to your blog it is submitted to the Akismet web service which runs hundreds of tests on the comment and returns a thumbs up or thumbs down.
It's a really easy to use system, which I highly recommend.
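If you want to call it without a wrapper library, the comment-check endpoint is just an HTTP POST that returns the literal string "true" for spam; a rough C# sketch (the API key and blog URL are placeholders you get from Akismet):

```csharp
// Minimal sketch: ask Akismet's comment-check endpoint whether a comment is spam.
using System.Collections.Specialized;
using System.Net;
using System.Text;

public static class AkismetClient
{
    public static bool IsSpam(string apiKey, string blogUrl,
                              string userIp, string userAgent, string commentText)
    {
        using (var client = new WebClient())
        {
            var form = new NameValueCollection
            {
                { "blog", blogUrl },
                { "user_ip", userIp },
                { "user_agent", userAgent },
                { "comment_type", "comment" },
                { "comment_content", commentText }
            };
            string url = "http://" + apiKey + ".rest.akismet.com/1.1/comment-check";
            byte[] response = client.UploadValues(url, "POST", form);
            return Encoding.UTF8.GetString(response) == "true";   // "false" means ham
        }
    }
}
```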
I've had good luck with Honeypots and Hashes.
By making it difficult for robots to post successfully, you can let users post without registration, captchas, or false positives from Akismet.
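The honeypot half of that can be as small as one extra field; a rough ASP.NET MVC sketch (MVC 2+ attribute names; the field name "website2" is an arbitrary choice, and it should be hidden from humans with CSS rather than type="hidden" so that bots still fill it in):

```csharp
// Minimal sketch: reject any comment where the CSS-hidden honeypot field is filled.
using System.Web.Mvc;

public class CommentsController : Controller
{
    [HttpPost]
    public ActionResult Create(string author, string body, string website2)
    {
        // Humans never see the honeypot field, so it should come back empty.
        if (!string.IsNullOrEmpty(website2))
            return new HttpStatusCodeResult(400);   // quietly drop the bot post

        SaveComment(author, body);                  // hypothetical persistence
        return RedirectToAction("Index");
    }

    private void SaveComment(string author, string body) { /* hypothetical */ }
}
```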
Have a CAPTCHA that is really simple. Perhaps make it always "orange"? I don't think anyone's done that before.
Akismet is definitely the #1 method I know of for limiting spam comments. It's also nice to offload that to a third party (at a reasonable price); that way, if the client complains, you can just 'shift the blame'.
Another option is to incorporate something like mod_security's spammer signature file. They have a list of keywords you can scan a comment for, and you place the message in moderation if you get a match. Though if you have a message board that actually discusses topics containing these keywords, you'll need a lot of moderators. :-)
You may also want to consider scanning IPs and matching them against Spamhaus or DShield's block lists. We recently started this approach and it has done wonders.
Things that don't work: requiring registration, simple captchas, user-agent checks... these can be automated or defeated with cheap labor.
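The block-list check mentioned above is just a DNS lookup: reverse the IPv4 octets, append the list's zone, and see whether the name resolves. A rough sketch, using Spamhaus ZEN as the example zone (check the list's usage terms before relying on it):

```csharp
// Minimal sketch: an IP is "listed" if the reversed-octet name resolves in the zone.
using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

public static class Dnsbl
{
    public static bool IsListed(string ipv4, string zone = "zen.spamhaus.org")
    {
        string reversed = string.Join(".", ipv4.Split('.').Reverse());
        try
        {
            IPAddress[] answers = Dns.GetHostAddresses(reversed + "." + zone);
            return answers.Length > 0;              // 127.0.0.x return codes mean listed
        }
        catch (SocketException)
        {
            return false;                           // NXDOMAIN: not listed
        }
    }
}
```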
I think you have several options...
Require registration to post comments - but that's more annoying than a captcha, so probably not the best idea
Examine the user-agent of the poster (see here) for something that looks genuine, or exclude those which look suspect
Use a nice captcha. As annoying as they are, used properly they aren't that bad. It took me 7 attempts to sign up for a Gmail account the other day because I just couldn't read what it said. A nice captcha isn't that bad really; keep it short and READABLE
If the spam you are receiving is link-heavy, you could assume any comment that contains >= 2 links is a spam comment and not post it to the blog unless the blog author approves it. This is what most comment-spam plugins do (a rough sketch follows below). I'm currently working on blog software and I adopted this solution in the interim until I can integrate Akismet fully.
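A minimal version of that link-count rule, assuming plain-text comments where links appear either as bare URLs or BBCode-style [url] tags, with the same >= 2 threshold:

```csharp
// Minimal sketch: hold a comment for moderation when it contains 2+ links.
using System.Text.RegularExpressions;

public static class CommentFilter
{
    private static readonly Regex LinkPattern =
        new Regex(@"https?://|\[url[=\]]", RegexOptions.IgnoreCase);

    public static bool NeedsModeration(string commentBody)
    {
        return LinkPattern.Matches(commentBody).Count >= 2;
    }
}
```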
I made spam into someone else's problem by using Disqus to run my blog's comments. There has been no spam since switching; Disqus keeps on top of it.
A few answers advised Akismet, but I disagree and consider a dynamic captcha approach the best one.