I am curious whether it is out of date to use the query string for an id. We have a web app running on .NET 2.0. When we display the detail of something (a product, for example) we use a query string like this: http://www.somesite.com/Shop/Product/Detail.aspx?ProductId=100
We use the query string so that users can save the link somewhere and come back any time later. I suppose we will use URL rewriting sooner or later, but in the meantime I would like to know your opinion. Thanks. Cheers, X.
A common strategy is to use an item ID in the URL, coupled with some keywords that describe the item. This is good from a user's perspective, because they can easily see what a URL refers to if they save it somewhere. More importantly, it's useful from an SEO (Search Engine Optimisation) point of view, as search engines will - it is said - rate a given URL more highly if it contains the keywords someone is searching for.
You can see this approach on this very site, where the ID after 'questions' is used for the database query and the text is purely for the benefit of users and search engines.
Whether you use a straightforward query string, or a more advanced approach that makes the ID look like part of the folder path, is up to you. It's largely a matter of personal taste.
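As a rough illustration of that pattern (the helper name and the exact rules below are my own assumptions, not anything this site actually uses), a keyword slug can be derived from the item title and appended to the ID:

// Hypothetical helper: turn "USB Coffee Maker!" into "usb-coffee-maker".
static string ToSlug(string title)
{
    string lower = title.ToLowerInvariant();
    // Collapse anything that isn't a letter or digit into a single hyphen.
    string slug = System.Text.RegularExpressions.Regex.Replace(lower, "[^a-z0-9]+", "-");
    return slug.Trim('-');
}

// e.g. "/Shop/Product/100/" + ToSlug(product.Name) - only the ID is used for the lookup;
// the slug is purely for users and search engines.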
Yes, it is old fashioned!
However, if you are thinking about changing it to a RESTful implementation as others have suggested, then you should continue to support the old URL and query-string addresses by implementing an HTTP 301 redirect to forward from the query-string URLs to the new RESTful URLs. This will ensure that any user's old links and bookmarks continue to work, while telling the search engine bots that the URL has changed.
Since your post is tagged ASP.Net, there is a good write-up on how you can support both, using the new ASP.Net routing mechanism here: http://msdn.microsoft.com/en-us/magazine/dd347546.aspx
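For example, on .NET 2.0 (where Response.RedirectPermanent is not available) a hand-rolled 301 in the old Detail.aspx code-behind might look roughly like this; the new URL shape is just an assumption for illustration:

// Detail.aspx.cs - hypothetical sketch of a permanent redirect from the
// legacy query-string URL to a new RESTful-style URL.
protected void Page_Load(object sender, EventArgs e)
{
    int productId;
    if (int.TryParse(Request.QueryString["ProductId"], out productId))
    {
        Response.StatusCode = 301;                            // Moved Permanently
        Response.AddHeader("Location", "/Shop/Product/" + productId);
        Response.End();                                       // stop further processing
    }
}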
Nothing wrong with query string parameters. Simple to create and understand. A lot of sites are using fancy URLs like www.somesite.com/Shop/Product/white_sox_t_shirt, which is cool and sort-of user friendly, but more work for us poor developers.
Using query strings is not outdated at all, it just has to be used in the right places. However, never place anything in the query string that could be a security issue, and remember that anything you read from the query string could have been modified, so you should validate all input.
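For instance, a minimal sketch of that validation in a page's code-behind (the connection string, table and column names are placeholders I've made up; uses System.Data.SqlClient):

// Hypothetical sketch: treat every query-string value as hostile input.
protected void Page_Load(object sender, EventArgs e)
{
    int productId;
    if (!int.TryParse(Request.QueryString["ProductId"], out productId) || productId <= 0)
    {
        Response.StatusCode = 400;   // reject anything that isn't a positive integer
        Response.End();
        return;
    }

    // Use the validated value only through a parameterised query,
    // never by concatenating it into SQL.
    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT Name, Price FROM Products WHERE ProductId = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", productId);
        conn.Open();
        // ... read with cmd.ExecuteReader() and display the product ...
    }
}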
It's not outdated, but another alternative is a more RESTful approach:
yourwebsite.com/products/100/usb-coffee-maker
The reason is that a) search engines usually ignore any URL with a query string (so the product.aspx?id=100 page may never get indexed) and b) having the name in the URL purely for display purposes supposedly helps SEO as well.
Permanent links are best for SEO. Also, what if your product moves to another database and the ID of the product needs to change?
I don't think there is much chance that a product's name or its manufacturer will change, though.
E.g. Apple/iPhone won't change :) That seems like a good permalink to me.
I am currently trying to code a simple ASP.NET URL shortener which allows me to customise the shortened URL. I am also not allowed to use open source, which means I cannot use any of the URL shortening services; I am required to develop one on my own.
But this is the first time I am doing this, so I have no idea how to start (excluding the UI).
I understand that similar questions have already been asked, but I've read through the posts and couldn't understand what they are about. I've also tried to Google for a solution, but it doesn't seem to be working out.
I would really appreciate any help given to me.
P.S. I am fairly new to programming and not strong in any particular programming language.
You would need:
A system to store pairs of shortened URLs and their full version.
A page which takes the shortened URL parameter (e.g. short.aspx?q=SHORTENED), looks it up in your data store, and redirects to the full URL.
Some interface to edit your data store, add new URLs, etcetera.
That should be it really. If this is too difficult, it might be smarter to start on a basic programming course first.
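A very rough sketch of the second point, assuming a ShortUrls table with Code and FullUrl columns (the table, column names and connection string are all made up for illustration; uses System.Data.SqlClient):

// short.aspx.cs - hypothetical lookup-and-redirect page.
protected void Page_Load(object sender, EventArgs e)
{
    string code = Request.QueryString["q"];
    if (string.IsNullOrEmpty(code))
    {
        Response.StatusCode = 404;
        return;
    }

    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT FullUrl FROM ShortUrls WHERE Code = @code", conn))
    {
        cmd.Parameters.AddWithValue("@code", code);
        conn.Open();
        object fullUrl = cmd.ExecuteScalar();
        if (fullUrl == null)
        {
            Response.StatusCode = 404;               // unknown code
        }
        else
        {
            Response.Redirect((string)fullUrl);      // send the visitor to the stored target
        }
    }
}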
I've been reading up on SEO and how to construct my links in terms of getting a better SERP position.
I'm using WordPress as the framework for my site and have custom templates retrieving data from my DB.
What makes a URL dynamic is the usage of ? and &. Nothing more, nothing less. Google recommends that I should not have too many attributes in my URL - and that's understandable.
Dynamic: www.mysite.com/?id=123&name=some+store+name&city=london
Static: www.mysite.com/london/some+store+name/123
Q1: I don't feel that adding the store ID in this static URL looks nice. But I do need it in order to fetch data from the DB, right?
Reading various blogs, I see many SEO (experts) saying different things, but I feel most of it is just talk without actually proving their statements. We can all agree that static URLs are good in terms of usability (and readability).
Q2: But many claim that static URLs prevent duplicate content. I don't agree with that, as all my content has a unique ID. Can anyone comment on this?
Q3: In the end, for the Google search engine (and others) it really doesn't matter if the URL is static or dynamic. But since Google is working towards user friendly content, is that the only argument for having static URLs?
1) There's no problem using DB ids alongside static URLs. Many huge e-commerce and other commercial sites do this (Amazon, eBay... hell, everyone really.)
2) A static URL in and of itself does not prevent duplicate content. There are hundreds of ways duplicates can happen (child pages, external copy, JavaScript, form fields, Ajax, archive sections... the list goes on.)
3) It doesn't matter if it's static or dynamic for indexing. But in terms of ranking well, static URLs containing informative keywords (relevant to the targeted searches) are hugely beneficial. Multivariate testing I've done shows users are also generally reassured by clean-looking URLs in terms of usability.
If you give me some more examples, I can probably help out a bit more.
URLs without parameters are always better. Parameters won't absolutely kill SEO - but it is better not to have them.
10 years ago Google would ignore parameters and penalize you for URLs with them. Today they are really good at figuring out these DB parameters - but not perfect. Among other things, Google has to try to figure out which URL parameters matter, which don't, and whether parameter order matters.
E.g. you may have URL parameters that store user preferences, navigation state etc. This will just proliferate URLs that Google has to try to decode. So what you should do is:
Right before generating a URL, at least sort your parameters.
Convert parameters that matter into things that don't look like parameters. So if I had a shoe store with URLs like http://mystore.com/mypage?category=boots&brand=great&color=red I'd rewrite that to something like http://mystore.com/mypage/category/boots/brand/great/color/red or even better:
http://mystore.com/mypage/boots/great/red
Then you can add the parameters that don't matter for the page content at the end. Google will figure out they don't matter.
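To make the sorting point concrete, here is a small sketch in C# (the helper name is made up; needs System.Collections.Generic, System.Linq and System.Web) that emits parameters in a fixed order so the same page always gets exactly one URL:

// Hypothetical helper: build the query string with keys in a stable, sorted order.
static string BuildQuery(IDictionary<string, string> parameters)
{
    List<string> parts = new List<string>();
    foreach (string key in parameters.Keys.OrderBy(k => k, StringComparer.Ordinal))
    {
        parts.Add(HttpUtility.UrlEncode(key) + "=" + HttpUtility.UrlEncode(parameters[key]));
    }
    return string.Join("&", parts.ToArray());
}

// {color=red, brand=great, category=boots} always becomes
// "brand=great&category=boots&color=red", never another permutation.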
The other reason to fix your URLs is that Google displays them to users in the SERP, and people are more likely to click on readable URLs than database URLs.
Why do big stores like Amazon use database URLs? Because they are giant, bad URLs don't hurt them, and their systems are so large and complex that it is the only way to manage them. But for smaller sites with fewer products, readable URLs are achievable and are one of the few advantages a small site can have over a big one.
If you observe Google SERP results closely, you will find that parts of the results are highlighted in bold. Looking closer, you can easily see that the search query gets highlighted or bolded in the title, description and URL of results that use the same words in their title, description and URL.
The thing is, if a website's URLs are dynamic and come with only a parameter ID, they are losing those keywords from the URL.
Ex:
http://www.johnzaccheofineart.com/catagory-2/?id=4
http://www.johnzaccheofineart.com/painting/johnzaccheo
Sample search: Painting for Sale
Now we can easily understand the difference in static versus dynamic URL performance. One URL contains words with no search value; the other contains the category name as well as the painter's name.
So, as a user, I would give preference to the second one, which is understandable from the URL itself.
I'm developing software which is going to provide in-depth information about URLs.
While the GET params are simple, I'm having trouble with the hash.
At first it was used to mark places in the document to navigate to, but we're past that now. I've seen JS engines using it to store params, similar to the GET strings.
So, here's my question: is everything that comes after a hash fair game, or are there any conventions about what it should look like?
Try these, they could help: Fragment Identifier (Wikipedia) or Pound Sign (Google).
They have lists of examples you could use.
It all depends on what you need. Hashes are used in modern web applications that make asynchronous calls to the server using Ajax. This, for example, allows the user to copy the link and receive the same content after pasting it (the actions taken are put into the hash, which changes the URL that would otherwise remain static).
You want to read http://www.jenitennison.com/blog/node/154
I have been wondering how TinyURL works.
I would like to develop something similar for my site, but as most people, I use GUIDs for ids. When an object is created, should I then generate a 10 character random string to use as public id, or is there a smarter approach?
Example of old url: www.mysite.com/default.aspx?userId={id}
Example of new url: www.mysite.com/pwzd4r9niy
You can use any kind of random string generator or GUID for this. I don't think there is a much smarter approach. (Palantir offers a nice alternative though, hashing the incoming URL.)
The rest is relatively straightforward: keep a database table with IDs and target URLs; when a request comes in, look up the ID and do a header redirect to the target URL.
More discussion in this blog post.
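If you go the random-string route, a rough sketch like this would do (the alphabet and length are arbitrary choices of mine); just retry with a new code on the rare unique-key collision when inserting:

// Hypothetical sketch: generate a short random code instead of exposing the GUID.
static readonly char[] Alphabet = "abcdefghijklmnopqrstuvwxyz0123456789".ToCharArray();
static readonly Random Rng = new Random();

static string NewShortCode(int length)
{
    char[] chars = new char[length];
    for (int i = 0; i < length; i++)
    {
        chars[i] = Alphabet[Rng.Next(Alphabet.Length)];
    }
    return new string(chars);
}

// NewShortCode(10) gives something like "pwzd4r9niy"; store it against the GUID
// (or the target URL) in the lookup table.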
There also are redirection services out there now that use words from a dictionary list to build a URL.
Sadly, EvilURL is gone! It used to create "short" URLs like
http://evilURL.com/donkey_porn-shotguns/cracking-virus-exploit
that was the only URL redirection service really worthwhile. :)
And, as a bit of trivia, http://to is the shortest redirection service (and, I think the shortest web URL) known to man.
Just hash the entire string, to a reasonable length.
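A sketch of that idea, assuming SHA-256 with the hex output truncated to a fixed length (collisions are possible, so you would still check the table before storing a new code):

// Hypothetical sketch: derive a short code from the full URL.
static string HashToCode(string url, int length)
{
    using (System.Security.Cryptography.SHA256 sha = System.Security.Cryptography.SHA256.Create())
    {
        byte[] hash = sha.ComputeHash(System.Text.Encoding.UTF8.GetBytes(url));
        string hex = BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
        return hex.Substring(0, length);   // keep only the first few characters
    }
}

// HashToCode(fullUrl, 10) always returns the same 10-character code for the same URL.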
How do you properly ensure that a user isn't tampering with query string values or action URL values? For example, you might have a Delete Comment action on your CommentController which takes a CommentID. The action URL might look like /Comments/Delete/3 to delete the comment with the id 3.
Now obviously you don't want anyone to be able to delete comment 3. Normally only the owner of the comment or an admin has permission to do so. I've seen this security enforced in different ways and would like to know how some of you do it.
Do you make multiple Database calls to retrieve the comment and check that the author of the comment matches the user invoking the delete action?
Do you instead pass the CommentID and the UserID down to the stored procedure that does the delete, and do a delete where the UserID and CommentID equal the values passed in?
Is it better to encrypt the query string values?
You don't.
It is a cardinal rule of programming, especially in this day and age, that you never trust any input which comes from the user, the browser, the client, etc.
It is also a cardinal rule of programming that you should probably not try to implement encryption and security yourself, unless you really know what you are doing. And even if you do know what you are doing, you will only remain one step ahead of the casual crackers. The smart ones are still going to laugh at you.
Do the extra query to ensure the logged-in user has the right set of permissions. That will make everyone's lives just that much simpler.
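A rough sketch of that extra check in the Delete action (the repository and property names here are invented for illustration, not taken from the question):

// Hypothetical sketch: load the comment and verify ownership before deleting.
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Delete(int id)
{
    Comment comment = commentRepository.GetById(id);    // assumed data-access helper
    bool mayDelete = comment != null &&
                     (comment.AuthorName == User.Identity.Name || User.IsInRole("Admin"));
    if (!mayDelete)
    {
        return new HttpUnauthorizedResult();            // owner or admin only
    }
    commentRepository.Delete(comment);
    return RedirectToAction("Index");
}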
Encrypting and decrypting query params is a trivial process, and there are some great examples of how to do so using an HttpModule here on Stack Overflow.
"You Don't", "You can't", or "It's not easy" are simply not acceptable responses in this day and age...
Vyrotek: The input method is not important. GET, POST, encrypted/obfuscated GET - no real difference. No matter how your application receives commands, to perform an administrative action it must make sure that the issuing user is allowed to do what he is asking for. The permission check must take place AFTER the command is received and BEFORE it gets executed. Otherwise it's no security at all.
Consider using the technique outlined in Stephen Walther's article "Tip #46 – Don't use Delete Links because they create Security Holes", which uses [AcceptVerbs(HttpVerbs.Delete)].
You can also allow only POST requests to the Delete controller action by using the AcceptVerbs attribute, as seen below.
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Delete(int? id)
{
    // Delete the comment here (after checking the current user's permissions),
    // then return a result, e.g. a redirect back to the comment list.
    return RedirectToAction("Index");
}
Then you could also use the antiforgery token as discussed here:
http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
I've done funky things like taking the query string, compressing it, and Base64 or just hex encoding it, so that "commentid=4&userid=12345" becomes "code=1a2b23de12769".
It's basically "Security through obscurity" but it does make a lot of work for someone trying to hack the site.
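For what it's worth, a minimal sketch of that kind of obfuscation (Base64 only, no compression; the helper names are made up, and this is hiding, not protection):

// Hypothetical sketch: hide the raw parameters behind a single "code" value.
static string EncodeQuery(string query)
{
    byte[] bytes = System.Text.Encoding.UTF8.GetBytes(query);
    return HttpUtility.UrlEncode(Convert.ToBase64String(bytes));
}

static string DecodeQuery(string code)
{
    byte[] bytes = Convert.FromBase64String(HttpUtility.UrlDecode(code));
    return System.Text.Encoding.UTF8.GetString(bytes);
}

// EncodeQuery("commentid=4&userid=12345") produces an opaque string to put in "?code=...";
// the server decodes it and must still check permissions before acting on it.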
You cannot easily do this.
I have fond memories of a site that used action URLs to do deletes.
All was good until they started search crawling the intranet.
Oops, goodbye data.
I would recommend a solution whereby you do not use querystrings for anything you do not wish to be edited.