PDF protection and SEO in Classic ASP - asp-classic

I have a Classic ASP website where I am selling PDFs. Once a user pays, I give them a link to download the PDF, like this:
https://mysite.com/products/ebook/mypdf.pdf
I want to protect it from
(1) being indexed by search engines, and
(2) being accessed directly by people who have not bought it.
How can I do this? Please suggest an approach.

You have to serve the content through an extra ASP page, e.g. getpdf.asp, instead of linking to the PDF directly.
See this answer, which covers most of what you need:
https://stackoverflow.com/a/12946733/911635
You will also have to add some access control that checks whether the current user is allowed to download the file.
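A rough sketch of what such a getpdf.asp could look like, in Classic ASP / VBScript. The Session("paidForEbook") flag, the file path and the file name are placeholders (assumptions, not part of the question); adapt the check to however your site records a completed purchase (session flag, orders table, etc.).

<%
' getpdf.asp - rough sketch; the session flag and the file path below
' are placeholders to adapt to your own purchase/order handling.
Option Explicit

Dim filePath, stream

' 1) Check that the current user has actually bought the document.
If Session("paidForEbook") <> True Then
    Response.Status = "403 Forbidden"
    Response.Write "You have not purchased this document."
    Response.End
End If

' 2) Keep the real file outside the web root (the path below is
'    hypothetical) so this page is the only way to reach it.
filePath = "D:\protected-files\mypdf.pdf"

' 3) Stream the file back as a download.
Set stream = Server.CreateObject("ADODB.Stream")
stream.Type = 1                      ' adTypeBinary
stream.Open
stream.LoadFromFile filePath

Response.ContentType = "application/pdf"
Response.AddHeader "Content-Disposition", "attachment; filename=mypdf.pdf"
Response.AddHeader "X-Robots-Tag", "noindex, nofollow"   ' keep the response out of the index
Response.BinaryWrite stream.Read

stream.Close
Set stream = Nothing
%>

Because the PDF no longer lives under /products/ebook/, the old direct URL stops working and there is nothing left for search engines to index; you can additionally disallow the download page in robots.txt.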

Related

How to implement internal/powerful site search functionality?

I have developed a site for our company; until now it has had no search functionality, and we are now planning to add one. Most of the time our page content comes from the database: we have an HTML editor through which our employees enter HTML content into the database, and that content is later shown on the page. However, a few things are still static, i.e. hardcoded in the page files, and those are also important (menu content, etc.). I want a search where, when a user enters a word, it is matched against both the database and the files, because the word may be hardcoded in a file. So please guide me on how to develop this kind of search, where the search covers both files and the database; if possible discuss it here, and also point me toward an article. Thanks.
As a starting point, the following MSDN code sample shows how to implement a search engine that can search contents (articles or posts...) in an ASP.NET website. It actually searches DB content...
Implement Search Engine in ASP.NET (CSASPNETSearchEngine)
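The question also asks about words that are hardcoded in the page files, which a database-only search does not cover. Below is a rough sketch of that two-pronged search, written in Classic ASP / VBScript to match the main question on this page (the sample above targets ASP.NET); the table and column names (Pages, Id, Title, HtmlContent), the Application("connString") connection string and the folder of static files are all assumptions to adapt to your site.

<%
' Two-pronged search sketch: database content first, then static files.
Option Explicit

Dim term, conn, cmd, rs, fso, folder, file, text

term = Trim(Request.QueryString("q"))
If term = "" Then Response.End

' 1) Search the HTML content that was entered into the database.
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open Application("connString")            ' assumed connection string

Set cmd = Server.CreateObject("ADODB.Command")
Set cmd.ActiveConnection = conn
cmd.CommandText = "SELECT Id, Title FROM Pages WHERE HtmlContent LIKE ?"
cmd.Parameters.Append cmd.CreateParameter("p1", 200, 1, 255, "%" & term & "%")   ' 200=adVarChar, 1=adParamInput

Set rs = cmd.Execute
Do While Not rs.EOF
    Response.Write "<a href=""page.asp?id=" & rs("Id") & """>" & _
                   Server.HTMLEncode(rs("Title")) & "</a><br>"
    rs.MoveNext
Loop
rs.Close

' 2) Search the static files for hardcoded words (menus etc.).
Set fso = Server.CreateObject("Scripting.FileSystemObject")
Set folder = fso.GetFolder(Server.MapPath("/"))
For Each file In folder.Files
    If LCase(fso.GetExtensionName(file.Name)) = "asp" Or _
       LCase(fso.GetExtensionName(file.Name)) = "html" Then
        text = fso.OpenTextFile(file.Path, 1).ReadAll      ' 1 = ForReading
        If InStr(1, text, term, vbTextCompare) > 0 Then
            Response.Write "<a href=""/" & file.Name & """>" & file.Name & "</a><br>"
        End If
    End If
Next

conn.Close
%>

For anything beyond a small site, a proper index (e.g. full-text indexing in the database) will scale much better than scanning files on every request.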

Restrict user to a single window

In a project I'm working on (ASP.NET 3.5 Web Forms), there is a requirement to restrict the user to working in only one window/tab at a time. I found this post detailing a solution: http://www.codeproject.com/KB/aspnet/MultipleTabWindows.aspx
However, one of the pages in my project has a requirement to open a private (related to the logged-in user) PDF document in a new window. The way I'm doing it is by building a request to a page inside my project and, from that page, streaming the PDF document. So, the URL of my document looks something like: http://localhost:4087/PdfPage.aspx?type=1&id=2
Q: Is there a way to bypass the "single window" rule for just the PDF page, or should I say "No, the only way is by opening the PDF in the same window"?
Thanks in advance
When I used that example, I put the code on the master page that most of the pages use. Some of the exceptions are links to PDF documents, the login page, and assorted error pages.
If that doesn't work, you could add logic to the JavaScript block to look at window.location and allow certain pages through.
Someone needs to say it: implementing any kind of security through JavaScript is inherently weak. All this really gets you is a shortcut to state management.
Under ideal conditions you should work with your client to make them receptive to the advice their IT department has to offer, instead of them mandating the implementation of whatever feature they see someone else use. Easier said than done, I know.
Best of luck!!

Flex 3: Project Architecture & SEO

I've got a Flex 3 project. One of the problems I have is that not very much of its content is indexed by Google. Currently, I pull data from a MySQL database, so the Googlebot doesn't see most of the site.
My goal is to increase the amount of content indexed by Google, improve the SEO, and improve SERPs.
I thought that instead of pulling the data from the database that I would change the project's architecture and create separate "pages". So, in my case, I would compile each puzzle separately and upload it to the server in its own directory. This way the info in each puzzle would get indexed.
The negative is that if I add a puzzle, I'd have to add a link to it in all of the puzzles that are already on the server. I would have to add the link, re-compile each puzzle and upload it to the server. Is there a way to get around this problem? Also, if I wanted to communicate some data from one puzzle to another in the future, I wouldn't be able to do so.
Any suggestions?
Thank you.
-Laxmidi
The usual way to achieve this goal is to develop a hidden parallel site in HTML.
On the first page you will have your Flash and, hidden by JavaScript, a list of links to the other pages. These links will be parsed by the robots. Ideally, the href pages are virtual (look up "URL rewriting"). On each "fake" page, your server-side language will print the content or links from your database on the page, along with the Flash. The Flash is provided with a string explaining where it is and what it's supposed to show.
Ex: http://www.mysite.com/category1/content7. The URL rewriting sends this request to http://www.mysite.com/index.php?uri=category1/content7. The page should display the Flash with the FlashVar "uri=category1/content7". The Flash knows which content it has to display, so when a user comes from Google by following this link, they will find the content they were looking for.
All linking and content for SEO should be in HTML; don't trust the robots' ability to read Flash.
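A rough sketch of one such "fake" page. The answer above assumes PHP; this version is written in Classic ASP / VBScript to match the main question on this page, and the swf name, the markup and the GetContentFor helper are placeholders for your own movie and data access.

<%
' index.asp?uri=category1/content7 - the uri value is filled in by the
' URL-rewrite rule; GetContentFor is a stand-in for a real DB lookup.
Option Explicit

Function GetContentFor(key)
    ' Placeholder: in the real site this would pull the puzzle text and
    ' related links for "key" out of the database.
    GetContentFor = "<p>Content for " & Server.HTMLEncode(key) & "</p>"
End Function

Dim uri
uri = Request.QueryString("uri")
%>
<html>
<body>
  <!-- Crawlable copy of the content, for robots that do not read Flash -->
  <div id="htmlContent"><%= GetContentFor(uri) %></div>

  <!-- The movie receives the same uri through FlashVars and shows the
       matching puzzle to real visitors -->
  <object type="application/x-shockwave-flash" data="puzzles.swf" width="800" height="600">
    <param name="movie" value="puzzles.swf">
    <param name="FlashVars" value="uri=<%= Server.URLEncode(uri) %>">
  </object>
</body>
</html>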
Have a look at Adobe's reference on deep linking.
You can also generate the website's sitemap.xml with a (daily) cron process, such that the URLs encode the state of the application you need. Each URL encodes whatever content needs to be retrieved from the db, while the site keeps just one index.html page.
Good luck!

Add-ons or libraries for adding user comment functionality to an existing ASP.NET page?

I have an ASP.NET site which I'd like to add page-level comments to without having to change everything over to a blog/CMS platform like BlogEngine.net, Wordpress, Umbraco, etc. Does anyone know of an add-on or library, either free or for-purchase, which can be added to certain ASPX pages to enable a stream of user comments at the end of the page?
The rest of the ASPX page needs to be able to have ASP.NET form controls, jQuery, and in some cases postback functionality I've written specifically for the page, which is why a simple blogging page is not enough: it would mean the blog engine owns the page apart from the static content the site owner can enter.
Ideally there would be a way I could add a code snippet or user control to each ASPX page where I wanted comments and then they would show up and be managed independently on each page. I'd like to have the ability for users who post comments to be emailed when additional comments or replies are posted to their comment.
I am currently not locked into a particular authentication method so that is not necessarily a limiting factor.
This seems fun; I've not used it yet, so I can't say whether it works well or not.
http://developers.facebook.com/news.php?blog=1&story=198

How to get search engines to understand a DB-driven ASP.NET site

All,
This would seem like a fairly basic ASP.NET question, but in all my years of coding, I've never really thought about it.
Say you have an ASP.NET 2.0 site with only a master page and a default.aspx, and it's a blog that saves all the data into the database. Links on the side are generated automatically. So... the URL is always just http://www.XXXXX.com/default.aspx.
So, with that being the case, what do you need to do so that, say, Google knows about all the different blog entries and links directly to the entries instead of just the base URL?
Is it as simple as changing the forms method to: method="get"?
Thanks, L. Lee Saunders
There are at least two solutions:
Search engines understand query strings, so just add the article IDs to the URLs in your anchor tags -- no need to even use a form control (see the sketch after this list).
Use URL rewriting to expose one set of URLs to the outside world (like /article-title/1234/) in your anchor tags, and then rewrite the URL to default.aspx when it arrives at your site; the page can then pull the article to be displayed from any number of places, including but not limited to a query string.
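A minimal sketch of the first option (emitting a plain, crawlable link for every entry), written in Classic ASP / VBScript for consistency with the main question on this page; the thread itself is ASP.NET, where the generated markup would be the same. The table name, column names and Application("connString") are assumptions.

<%
' Emit one ordinary anchor per blog entry so a crawler can reach each one.
Option Explicit

Dim conn, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open Application("connString")            ' assumed connection string

Set rs = conn.Execute("SELECT Id, Title FROM BlogEntries ORDER BY PostedOn DESC")
Do While Not rs.EOF
    ' Each entry gets its own URL, distinguished by the query string.
    Response.Write "<a href=""default.asp?postid=" & rs("Id") & """>" & _
                   Server.HTMLEncode(rs("Title")) & "</a><br>" & vbCrLf
    rs.MoveNext
Loop

rs.Close
conn.Close
%>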
You could have a REST web service so that you can just use URLs to navigate the site, and perhaps have a front page with some new posts, so that the spider can navigate the site.
As an example, look at the URLs for SO; it is easy for a spider to navigate this database-driven website.
Create a page that just serves up an XML sitemap (the data obviously being pulled from your database) and submit the sitemap to Google.
Google will then index any links in your sitemap.
(This assumes that there is some difference between the URL of each article, e.g. a query string key/value.)
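A rough sketch of such a sitemap page, again in Classic ASP / VBScript to match the main question on this page (the thread itself is ASP.NET, where the idea is identical). The table name, column names, connection string and site URL are assumptions.

<%
' sitemap.asp - writes one <url> element per article, straight from the DB.
Option Explicit

Dim conn, rs
Response.ContentType = "text/xml"
Response.Write "<?xml version=""1.0"" encoding=""UTF-8""?>" & vbCrLf
Response.Write "<urlset xmlns=""http://www.sitemaps.org/schemas/sitemap/0.9"">" & vbCrLf

Set conn = Server.CreateObject("ADODB.Connection")
conn.Open Application("connString")            ' assumed connection string
Set rs = conn.Execute("SELECT Id, PostedOn FROM BlogEntries")

Do While Not rs.EOF
    Response.Write "  <url>" & vbCrLf
    Response.Write "    <loc>http://www.example.com/default.asp?postid=" & rs("Id") & "</loc>" & vbCrLf
    Response.Write "    <lastmod>" & Year(rs("PostedOn")) & "-" & _
                   Right("0" & Month(rs("PostedOn")), 2) & "-" & _
                   Right("0" & Day(rs("PostedOn")), 2) & "</lastmod>" & vbCrLf
    Response.Write "  </url>" & vbCrLf
    rs.MoveNext
Loop

Response.Write "</urlset>"
rs.Close
conn.Close
%>

Submit the resulting URL through Google's webmaster tools and every entry listed in it becomes crawlable.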
Useful Link(s):
Web Sitemap Generators
Google Sitemap Validator
Google Sitemaps for ASP.NET 2.0 (there are about a gazillion interesting links off the back of this as well).
Some sort of URL rewriting may be the answer.
I wouldn't recommend a postback for your situation; it can get ugly for refreshes etc. So, yes, change the method to "get".
Then, say, your page default.aspx?postid=12345 will get translated into /mm/dd/yy/this-is-my-post.aspx.
