Flex 3: Project Architecture & SEO

I've got a Flex 3 project. One of the problems I have is that not very much of its content is indexed by Google. Currently, I pull data from a MySQL database, so the Googlebot doesn't see most of the site.
My goal is to increase the amount of content indexed by Google, improve the SEO, and improve SERPs.
I thought that instead of pulling the data from the database, I would change the project's architecture and create separate "pages". So, in my case, I would compile each puzzle separately and upload it to the server in its own directory. This way the info in each puzzle would get indexed.
The downside is that if I add a puzzle, I'd have to add a link to it in all of the puzzles already on the server: add the link, re-compile each puzzle, and upload it again. Is there a way to get around this problem? Also, if I wanted to communicate some data from one puzzle to another in the future, I wouldn't be able to do so.
Any suggestions?
Thank you.
-Laxmidi

The usual way to achieve this goal is to develop a hidden parallel site in HTML.
On the first page you have your Flash and, hidden by JavaScript, a list of links to the other pages. These links will be crawled by the robots. Ideally, the linked pages are virtual (look up "URL rewriting"). On each "fake" page, your server-side language prints content or links from your database AND embeds the Flash. The Flash is passed a string telling it where it is and what it's supposed to show.
Example: http://www.mysite.com/category1/content7. The URL rewriting sends this request to http://www.mysite.com/index.php?uri=category1/content7. That page should display the Flash with the FlashVar "uri=category1/content7". The Flash then knows which content to display, so a user who arrives from Google by following this link finds the content they were looking for.
All links and content meant for SEO should be in HTML; don't trust the robots' ability to read Flash.
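
A minimal sketch of what such a "fake" page handler could look like, assuming Apache rewrites /category1/content7 to index.php?uri=category1/content7 and a hypothetical pages table (uri, title, body) — the same URI is printed as HTML for the robots and handed to the SWF as a FlashVar:

    <?php
    // index.php - one virtual URI serves both the crawlers and the Flash client.
    // Assumes an Apache rule along the lines of:
    //   RewriteRule ^(.+)$ index.php?uri=$1 [L,QSA]
    // Table/column names below are illustrative, not from the original post.
    $uri = isset($_GET['uri']) ? $_GET['uri'] : 'home';

    $pdo  = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'password');
    $stmt = $pdo->prepare('SELECT title, body FROM pages WHERE uri = ?');
    $stmt->execute(array($uri));
    $page = $stmt->fetch(PDO::FETCH_ASSOC);

    if (!$page) {
        header('HTTP/1.1 404 Not Found');
        exit('Not found');
    }
    ?>
    <html>
    <head><title><?php echo htmlspecialchars($page['title']); ?></title></head>
    <body>
      <!-- Plain HTML copy of the content: this is what the robots index. -->
      <h1><?php echo htmlspecialchars($page['title']); ?></h1>
      <div><?php echo $page['body']; ?></div>

      <!-- The same URI goes to the SWF as a FlashVar so it shows the matching content. -->
      <object width="800" height="600">
        <param name="movie" value="/main.swf" />
        <param name="FlashVars" value="uri=<?php echo urlencode($uri); ?>" />
        <embed src="/main.swf" flashvars="uri=<?php echo urlencode($uri); ?>"
               width="800" height="600"></embed>
      </object>
    </body>
    </html>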

Have a look at Adobe's reference on deep linking.
You can generate the site's sitemap.xml with a daily cron process, so that each URL encodes the application state you need. The URL encodes whatever content has to be retrieved from the db, while everything is served from a single index.html page.
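A rough sketch of that cron job, assuming a hypothetical puzzles table and a single index.html entry point that reads a puzzle id from the URL (state is passed in the query string so crawlers keep it):

    <?php
    // generate_sitemap.php - run daily from cron, e.g.:
    //   0 3 * * * php /var/www/generate_sitemap.php > /var/www/sitemap.xml
    // Each URL encodes the application state (here a puzzle id) that the single
    // index.html page uses to fetch the right content from the db.
    $pdo  = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'password');
    $rows = $pdo->query('SELECT id, updated_at FROM puzzles')->fetchAll(PDO::FETCH_ASSOC);

    echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
    echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
    foreach ($rows as $row) {
        echo "  <url>\n";
        echo '    <loc>http://www.mysite.com/index.html?puzzle=' . (int) $row['id'] . "</loc>\n";
        echo '    <lastmod>' . date('Y-m-d', strtotime($row['updated_at'])) . "</lastmod>\n";
        echo "  </url>\n";
    }
    echo "</urlset>\n";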
good luck!

Related

How to refresh facebook scrape cache for a whole site

I need to refresh Facebook's scrape cache for every page on my web site (3000+ pages).
The only way I know of is the Open Graph Debugger, which is too tedious; I can't run it for 3000+ pages one by one.
I read on the Facebook developer support page that this (Stack Overflow) is the place to ask questions, but there is little to no knowledge here about refreshing Facebook's URL cache.
Can you please suggest any working solution to re-scrape a page?
My web site: Mentallica
One possible answer, given the number of URLs you've got, is to use the batch invalidator. You could pull a list of your URLs from your access logs, or maybe do a recursive directory listing and convert the folder paths to URLs (if it's a flat site), or the like. At least you don't have to do them one at a time. Once you have a list, paste it into the invalidator (multiple lines, one URL per line).
The batch invalidator is here:
https://developers.facebook.com/tools/debug/sharing/batch/
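If pasting 3000+ URLs into the tool is still too manual, the Graph API also accepts a forced re-scrape per URL (a POST with scrape=true). A rough PHP sketch, assuming a urls.txt with one URL per line and a valid app access token (both placeholders here):

    <?php
    // rescrape.php - force Facebook to re-scrape a list of URLs via the Graph API.
    // APP_ID|APP_SECRET and urls.txt are placeholders, not values from the post.
    $accessToken = 'APP_ID|APP_SECRET';
    $urls = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

    foreach ($urls as $url) {
        $ch = curl_init('https://graph.facebook.com/');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
            'id'           => $url,
            'scrape'       => 'true',
            'access_token' => $accessToken,
        )));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        curl_close($ch);

        echo $url . ' => ' . substr((string) $response, 0, 120) . "\n";
        // Be gentle with rate limits when looping over thousands of URLs.
        sleep(1);
    }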
It IS frustrating. I've searched several places, and don't really see a solution. We have a website with all of the proper tags, yet FB refuses to refresh any past posts with the new website data.
3 years later, but this can help someone: paste your URL here and click "Fetch new scrape information": https://developers.facebook.com/tools/debug/og/object/

Find the actual page opened from a URL in ASP.NET

Hi, I keep running across websites which, when browsed or searched (using their own search function), return a static URL, e.g. ?id=16 or default.aspx, no matter what page of the site you visit after the search has been performed. This becomes a problem when I want to go directly to a post/page within one of these sites, so I'm wondering: how could I actually find out what the absolute URL is?
That way I could navigate straight to it. I'm not really familiar with coding; I tried looking in the page source, but I wasn't really able to glean anything from there.
The basics of ASP.NET URLs: http://www.codeproject.com/Articles/142013/There-is-something-about-Paths-for-Asp-net-beginne
It all really depends on what you're trying to find; finding a back way to locate an absolute path is highly doubtful. If the owner of the site (most blogs) wants you to have a permalink to a page, they use URL rewriting to put things like the post title into the URI. A lot of MVC sites do this now.
The '?id=16' you're seeing is just a query string, a parameter that drives whatever logic the site is doing behind the scenes.

Can I track who is linking or manipulating my site's data?

Is it possible to track whether someone links to data on my site? Specifically, whether my data is used in a site dynamically generated by a developer program? I would like to know if someone is blatantly passing off my site's data as their own. There are obviously ways around directly linking to content, such as content manipulation or even manual manipulation. But if someone were to link to (or copy word for word, or manipulate) my content on their website, is there a way to track it?
Can I prevent someone from scraping my website at all, or is everything just up for grabs?
The best and easiest answer is Google Webmaster Tools: HERE
Actually doing this yourself is very hard: you would need to crawl the web to discover the links that point to your pages. Dynamic content gets linked in the same way, so it would be found by Google as well.
This tool lets you see the external links that point to your site, and you can check them.
As an extra measure, you can monitor requests and traffic to your site and find IPs that hit the same page over and over again. That can tell you that an external page is dynamically loading content from your web page.
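A quick-and-dirty sketch of that monitoring idea, counting how often each (IP, page) pair shows up in a standard Apache combined access log (the log path and threshold are assumptions):

    <?php
    // scrape_watch.php - count (IP, path) pairs in the access log and print
    // the heaviest repeaters; repeated hits on one page from one IP can hint
    // at another site loading your content dynamically.
    $logFile   = '/var/log/apache2/access.log';
    $threshold = 500; // flag pairs requested more than this many times

    $counts = array();
    $handle = fopen($logFile, 'r');
    while (($line = fgets($handle)) !== false) {
        // Combined log format: IP - - [date] "METHOD /path HTTP/1.1" status size ...
        if (preg_match('/^(\S+) \S+ \S+ \[[^\]]+\] "\S+ (\S+)/', $line, $m)) {
            $key = $m[1] . ' ' . $m[2];
            $counts[$key] = isset($counts[$key]) ? $counts[$key] + 1 : 1;
        }
    }
    fclose($handle);

    arsort($counts);
    foreach ($counts as $key => $count) {
        if ($count < $threshold) {
            break;
        }
        echo $count . "\t" . $key . "\n";
    }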
EDIT:
Here is a good article on this subject: link - scroll down and you can see the use of the Google webmaster tool together with some other programs and methods.
Here is a good starting guide to Google Webmaster Tools: link
ENJOY!

How to implement internal/powerful site search functionality?

I have developed a site for our company; until now it has had no search functionality, and we are now planning to add one. Most of the time our page content comes from the database: we have an HTML editor through which our employees enter HTML content into the database, and that content is later shown on the page. A few things are still static, meaning they are hardcoded in the page, and those are also important (menu content, etc.). I want that when a user enters a search term, it is matched against both the database and the files, because the word may be hardcoded in a file. Please guide me on how to develop this kind of search, where the search covers both files and the database. If possible, discuss it here and also point me toward articles. Thanks.
As a starting point, the following MSDN code sample shows how to implement a search engine that can search content (articles, posts, etc.) in an ASP.NET website. It actually searches DB content:
Implement Search Engine in ASP.NET (CSASPNETSearchEngine)
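Whatever the stack, the underlying approach is the same: run the search term against the database content and against the static template files, then merge the results. A minimal sketch of that idea (shown here in PHP for brevity; table, column and directory names are assumptions, not taken from the MSDN sample):

    <?php
    // combined_search.php - search a term against db content and static files.
    function searchSite(PDO $pdo, $term) {
        $results = array();

        // 1. Database content entered through the HTML editor.
        $stmt = $pdo->prepare(
            'SELECT id, title FROM pages WHERE title LIKE :q1 OR body LIKE :q2');
        $like = '%' . $term . '%';
        $stmt->execute(array(':q1' => $like, ':q2' => $like));
        foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
            $results[] = array('type' => 'db', 'id' => $row['id'], 'title' => $row['title']);
        }

        // 2. Static/hardcoded content (menus, etc.) in the page templates.
        foreach (glob('/var/www/site/templates/*.html') as $file) {
            $text = strip_tags(file_get_contents($file));
            if (stripos($text, $term) !== false) {
                $results[] = array('type' => 'file', 'file' => basename($file));
            }
        }

        return $results;
    }

    $pdo = new PDO('mysql:host=localhost;dbname=company_site', 'user', 'password');
    print_r(searchSite($pdo, 'annual report'));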

How to attach a site with its thumbnail to a Drupal node?

Do you have any Drupal module (or other solution) to implement a feature similar to Facebook's Share a Link?
To be precise: you paste a link, and a preview of the site is generated, consisting of its title, a short excerpt, and a thumbnail of one of the site's images.
You'll need to do some pretty fancy stuff when snagging that thumbnail.
That means parsing the page and picking out candidate thumbnails from the image tags on the page.
It will need to do this via JavaScript after the link has been pasted.
Facebook actually caches its thumbnails for page sharing once a day, so it chooses not to grab them at run time for the client every time.
There are certainly libraries (and maybe a jQuery plugin) that would let you slurp a URL into memory, traverse it, and present some on-the-fly images.
Check out the Tumblr Share tool. You might be able to reverse engineer from that.
As for an existing Drupal module, that seems unlikely. I'd love to hear about one, though.
You could also think about a third-party screenshot service, but that's a pain too.
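
Since Drupal itself is PHP, here is a rough server-side sketch of the fetch-and-parse step described above, preferring an og:image and falling back to the first <img> (the function name and the use of file_get_contents are assumptions; inside Drupal you would use its own HTTP client):

    <?php
    // link_preview.php - fetch a URL and extract title, excerpt and a thumbnail
    // candidate, roughly what a "Share a Link" style preview needs.
    function buildLinkPreview($url) {
        $html = file_get_contents($url);
        if ($html === false) {
            return null;
        }

        $doc = new DOMDocument();
        @$doc->loadHTML($html);               // suppress warnings from messy markup
        $xpath = new DOMXPath($doc);

        $preview = array('url' => $url, 'title' => '', 'excerpt' => '', 'thumbnail' => '');

        $titleNode = $xpath->query('//title')->item(0);
        if ($titleNode) {
            $preview['title'] = trim($titleNode->textContent);
        }

        $descNode = $xpath->query('//meta[@name="description"]/@content')->item(0);
        if ($descNode) {
            $preview['excerpt'] = trim($descNode->nodeValue);
        }

        // Prefer an og:image if the page declares one, otherwise the first <img>.
        $ogImage = $xpath->query('//meta[@property="og:image"]/@content')->item(0);
        if ($ogImage) {
            $preview['thumbnail'] = $ogImage->nodeValue;
        } else {
            $firstImg = $xpath->query('//img/@src')->item(0);
            if ($firstImg) {
                $preview['thumbnail'] = $firstImg->nodeValue;
            }
        }

        return $preview;
    }

    print_r(buildLinkPreview('http://www.example.com/'));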
