Do you have any Drupal module (or other solution) to implement a feature similar to Facebook's Share a Link?
To be precise:
you paste a link
a preview of the site is generated, consisting of:
the title
a short excerpt
and a thumbnail of one of the site's images
You'll need to do some pretty fancy stuff when snagging that thumbnail.
That means parsing the page and picking out candidate thumbnails from the tags on the page.
It will need to do this via JavaScript after the link has been placed.
Facebook actually caches the thumbnails it uses for page sharing (refreshing about once a day), so it chooses not to fetch them at run time for the client on every share.
There are certainly libraries (and maybe a jQuery plugin) that would let you slurp a URL into memory, then traverse it and present some on-the-fly images.
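For illustration, here's a rough PHP sketch of that scraping step (the example URL and the preference for an og:image tag are my assumptions, not the behaviour of any existing module):

<?php
// Fetch the page, prefer an og:image meta tag if the site declares
// one, and otherwise collect candidate <img> tags for the thumbnail.
$html = @file_get_contents('http://example.com/some-page');
$doc = new DOMDocument();
@$doc->loadHTML($html); // suppress warnings from real-world markup

$candidates = array();
foreach ($doc->getElementsByTagName('meta') as $meta) {
  if ($meta->getAttribute('property') == 'og:image') {
    $candidates[] = $meta->getAttribute('content');
  }
}
foreach ($doc->getElementsByTagName('img') as $img) {
  $candidates[] = $img->getAttribute('src');
}

// Title and a (crude) short excerpt for the preview.
$title = '';
$titles = $doc->getElementsByTagName('title');
if ($titles->length) {
  $title = trim($titles->item(0)->textContent);
}
$excerpt = substr(trim(strip_tags($html)), 0, 200);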
Check out the Tumblr Share tool. You might be able to reverse engineer from that.
As for an existing Drupal module, that seems unlikely. I'd love to hear of one, though.
You could also think about a third-party screenshot service, but that's a pain too.
I have created some pages in WordPress using the Visual Composer plugin. When I try to share those pages on social media sites, the shares show the Visual Composer tags as well. Is there any way to get rid of these tags?
The tags come out like:
MY PAGE TITLE
[vc_row type=
my page text
The Visual Composer tags are simply arbitrary bits of text that the VC framework picks up and, via JavaScript, converts into the visual layout that you see.
To demonstrate this, edit a page and click the 'Classic Mode' button; this will show you the 'real' content/markup of your page. When you share the page via social media, that network looks for key info on your page to grab and display.
All the major networks have some form of metadata that they look for first. They use these special tags to see what content you want them to use for your site. If these tags aren't present on your site, they will simply grab content from the page markup. Hence, the Visual Composer tags get picked up.
So, what you need to do is make your site use the necessary metadata/tags for each of the networks, and then populate these fields without the Visual Composer tags.
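For example, here's a minimal sketch of that idea for a WordPress theme's functions.php (the function name is made up, and a plugin like Yoast does this properly for you):

<?php
// Print Open Graph tags with the Visual Composer shortcodes stripped
// out, so social networks never see them.
add_action( 'wp_head', 'myprefix_og_tags' ); // hypothetical name
function myprefix_og_tags() {
  if ( ! is_singular() ) {
    return;
  }
  $content = get_post_field( 'post_content', get_queried_object_id() );
  // strip_shortcodes() removes [vc_row ...] and friends before the
  // description is built.
  $description = wp_trim_words( wp_strip_all_tags( strip_shortcodes( $content ) ), 30 );
  printf( '<meta property="og:title" content="%s" />' . "\n", esc_attr( get_the_title() ) );
  printf( '<meta property="og:description" content="%s" />' . "\n", esc_attr( $description ) );
  printf( '<meta property="og:url" content="%s" />' . "\n", esc_url( get_permalink() ) );
}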
The easiest way to do this is to use an existing plugin that takes care of it for you. You'll typically find this in any decent SEO plugin, and in 99.9% of cases you'll want an SEO plugin for WordPress anyway.
By far the most popular, and in my experience the best, SEO plugin is Yoast SEO. It's a free plugin that not only gives you great, easy-to-use SEO tools, it also allows you to set custom description fields for pages/posts. This means you can remove the VC tags so they no longer show up on search engines and social networks. Plus, depending on which version you use, you can also customise things on a per-network basis.
Facebook, Twitter, etc. all cache shared content heavily. So if a page is shared and then edited, the same content will still be shared for a period of time; they don't grab a live copy every time a page is shared. This can make developing this side of your site a little difficult.
Luckily, there are tools to help with this: Facebook's Sharing Debugger and Twitter's Card Validator both let you preview (and refresh) what gets shared.
Is it possible to track whether someone links to data on my site? Specifically, if my data is used on a site dynamically generated by a developer program? I would like to know if someone is blatantly passing off my site's data as their own. There are obviously ways around directly linking to content, such as content manipulation or even manual copying. But if someone were to link to (or directly add, word for word, or manipulate) my content on their website, is there a way to track it?
Can I avoid someone being able to scrape my website at all, or is everything just up for grabs?
The best answer, and the easy one, is called Google Webmaster Tools!
HERE
Actually doing that yourself is very hard: you would need to crawl the web to discover the links that point to your pages. Dynamic content is linked as well, so Google will find it too.
This tool will let you see the external links that point to your site, and you can check them.
As an extra step, you can monitor requests and traffic to your site and find IPs that are hitting the same page over and over again. That can tell you that an external page is dynamically loading content from your web page.
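As a rough illustration of that monitoring idea (the log path and the "combined" log format are assumptions), a small PHP script can count repeated (IP, path) pairs in your access log:

<?php
// Count how often each IP requests the same path; persistently high
// counts for one pair can hint at automated scraping.
$counts = array();
foreach (file('/var/log/apache2/access.log') as $line) {
  // Typical combined format: IP - - [date] "GET /path HTTP/1.1" ...
  if (preg_match('/^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)/', $line, $m)) {
    $key = $m[1] . ' ' . $m[2];
    $counts[$key] = isset($counts[$key]) ? $counts[$key] + 1 : 1;
  }
}
arsort($counts);
foreach (array_slice($counts, 0, 20, true) as $key => $hits) {
  echo $hits . "\t" . $key . "\n"; // the 20 most repeated pairs
}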
EDIT:
Here is a good article on this subject: link - scroll down and you can see the use of the Google Webmaster tool with some other programs and methods.
Here is a good starter guide to Google Webmaster Tools: link
ENJOY!
I've got a Flex 3 project. One of the problems I have is that not very much of its content is indexed by Google. Currently, I pull data from a MySQL database, so the Googlebot doesn't see most of the site.
My goal is to increase the amount of content indexed by Google, improve the SEO, and improve SERPs.
I thought that instead of pulling the data from the database, I would change the project's architecture and create separate "pages". So, in my case, I would compile each puzzle separately and upload it to the server in its own directory. This way the info in each puzzle would get indexed.
The negative is that if I add a puzzle, I'd have to add a link to it in all of the puzzles that are already on the server: add the link, re-compile each puzzle, and upload it to the server. Is there a way to get around this problem? Also, if I wanted to communicate some data from one puzzle to another in the future, I wouldn't be able to do so.
Any suggestions?
Thank you.
-Laxmidi
The usual way to achieve this goal is to develop a hidden parallel site in HTML.
On the first page you will have your Flash and, hidden by JavaScript, a list of links to the other pages. These links will be parsed by the robots. Ideally, the href pages are virtual (look up "URL rewriting"). On each "fake" page, your server-side language prints content or links from your database on the page, AND the Flash. The Flash is provided with a string explaining where it is and what it's supposed to show.
Ex: http://www.mysite.com/category1/content7 The URL rewriting sends this request to http://www.mysite.com/index.php?uri=category1/content7. The page should display the Flash with the FlashVar "uri=category1/content7". The Flash knows which content it has to display, so when a user comes from Google following this link, he will find the content he was looking for.
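A minimal sketch of such an index.php (the rewrite rule, the helper function, and the markup are illustrative, not working code):

<?php
// index.php - the "fake page" idea. An .htaccess rule like
//   RewriteRule ^([a-z0-9-]+/[a-z0-9-]+)$ index.php?uri=$1 [L]
// sends /category1/content7 here.
$uri = isset($_GET['uri']) ? $_GET['uri'] : '';

// Hypothetical helper: load the title/body for this URI from the db.
$content = my_load_content_from_db($uri);

// Plain HTML for the robots...
echo '<h1>' . htmlspecialchars($content['title']) . '</h1>';
echo '<p>' . htmlspecialchars($content['body']) . '</p>';

// ...and the same URI handed to the Flash as a FlashVar, so a visitor
// arriving from Google lands on the matching content.
printf('<object data="main.swf" type="application/x-shockwave-flash">'
  . '<param name="FlashVars" value="uri=%s" /></object>',
  urlencode($uri));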
All linking and content for SEO should be in HTML; don't trust the robots' ability to read Flash.
Have a look at Adobe's reference on deep linking.
You can generate the website's sitemap.xml with a (daily) cron process, such that the URLs encode the application state you need. Each URL encodes whatever content needs to be retrieved from the db, with just one index.html page.
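A hedged sketch of such a cron job (the table, column, and connection details are assumptions):

<?php
// make_sitemap.php - run daily via cron to regenerate sitemap.xml
// from the puzzle table; each <loc> encodes the state the single
// index.html page should restore.
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');
$xml = '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
$xml .= '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
foreach ($pdo->query('SELECT slug FROM puzzles') as $row) {
  $xml .= '  <url><loc>http://www.mysite.com/puzzle/'
    . htmlspecialchars($row['slug']) . '</loc></url>' . "\n";
}
$xml .= '</urlset>' . "\n";
file_put_contents('sitemap.xml', $xml);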
good luck!
The task is relatively straightforward:
A Drupal website displays a list of articles with thumbnails. Some visitors would like to view it without images by clicking on a button/link and have that preference saved.
e.g. http://patterntap.com/collections/index/
The problem is all visitors are anonymous and given certain traffic, page cache is enabled.
My idea was to use some simple JavaScript to set a cookie, refresh the page and, depending on the cookie value (or its presence/absence), display or hide the images.
Except Drupal serves cached pages quite early, and the only quick way to modify the cached version that I could find is by hacking includes/bootstrap.inc to add a custom class to the body classes, then hiding the images with CSS.
A very wrong approach, I know. But I wonder if there is a way to save different versions of a page and serve the correct version?
Edit:
need to keep the same URI
the JS to show/hide the images without a reload and set the cookie is already in place
hook_boot() is not really called for cached pages, so I can't do it via a custom module
.htaccess mods?
Edit/solution:
In the end I went with Rimian's suggestion. But it is possible to accomplish the task using your own cache.inc implementation, as seen in the Mobile Tools module. Specifically, by extending cache.inc and updating settings.php to include:
$conf['page_cache_fastpath'] = FALSE;
$conf['cache_inc'] = 'path/to/my/module/my_module_cache.inc';
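For reference, a partial, hedged sketch of that cache.inc idea (the file and cookie names are illustrative; see Mobile Tools for a complete, working implementation):

<?php
// my_module_cache.inc - in Drupal 6 this file replaces core's
// includes/cache.inc, so in practice you copy that file here and then
// vary the page-cache ID so "images" and "no images" visitors get
// separate cached copies.

function my_module_page_cid_suffix() {
  // Anonymous visitors who clicked "hide images" carry this cookie.
  return empty($_COOKIE['noimages']) ? '' : ':noimages';
}

// Inside the copied cache_get() and cache_set(), adjust the page bin:
//
//   if ($table == 'cache_page') {
//     $cid .= my_module_page_cid_suffix();
//   }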
So let me get this right. You wanna hide some images on a cached page if the user chooses to?
Why don't you write some jQuery or JavaScript and load it into your cached page with all the rest of the document?
Then the client/browser decides whether to run your script and hide the images, depending on some parameter you passed along with the request to that page or in the cookie. The script gets cached and only runs when you call it.
If you were hacking the bootstrap for something like that you'd really need to be rethinking what you were doing. Crazy! :)
Also take a look at cache_get and cache_set:
http://api.drupal.org/api/drupal/includes--cache.inc/6
I'm not sure I 100% understand what you are trying to do, but here are my thoughts. One of your root problems is that you are trying to access what is essentially different content at the same URI.
If this is truly what you want to do, then Rimian's suggestion of checking out cache_get and cache_set may be worthwhile.
Personally, it seems cleaner to me to have your "with thumbnails" and "without thumbnails" views accessed via different URIs. Depending on exactly what you want to accomplish, a GET variable may be an even better way to go. With either of these two options you would hide or show your thumbnails at the theme layer; pages with different paths or GET variables get cached separately.
If you want the visitor to be able to switch views without a page reload, then jQuery and a cookie would probably suit your needs. Switching back and forth would be quite simple.
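To illustrate the GET-variable route, here's a sketch of a Drupal 6 preprocess function (the theme name, parameter, and CSS class are made up); since Drupal 6 keys the anonymous page cache on the full URL including the query string, each variant gets its own cached copy:

<?php
// In the hypothetical theme's template.php:
// ?thumbnails=0 toggles a body class at the theme layer.
function mytheme_preprocess_page(&$variables) {
  if (isset($_GET['thumbnails']) && $_GET['thumbnails'] == '0') {
    $variables['body_classes'] .= ' no-thumbnails';
  }
}
// Then in the theme's CSS:
//   .no-thumbnails .article-thumbnail img { display: none; }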
I have an ASP.NET application with a desired feature: users would like to be able to take a screenshot. While I know this can be simulated, it would be really great to have a way to take a URL (or the currently rendered page) and turn it into an image that can be stored on the server.
Is this crazy? Is there a way to do it? If so, any references?
I can tell you right now that there is no way to do it from inside the browser, nor should there be. Imagine that your page embeds GMail in an iframe. You could then steal a screenshot of the person's GMail inbox!
This could be made safe by having the browser "black out" all iframes and embeds that would violate cross-domain restrictions.
You could certainly write an extension to do this, but be aware of the security considerations outlined above.
Update: you can use a canvas utility function to get a screenshot of a page on the same origin as your code. There's even a library that allows you to do this: http://experiments.hertzen.com/jsfeedback/
You can find other possible answers here: Using HTML5/Canvas/JavaScript to take screenshots
Browsershots has an XML-RPC interface and available source code (in Python).
I used the free assembly UrlScreenshot.dll, which you can download here.
Works nicely!
There is also WebSiteScreenShot but it's not free.
You could try a browser plugin like IE7 Pro for Internet Explorer, which allows you to save a screenshot of the current site to a file on disk. I'm sure there is a comparable plugin for Firefox out there as well.
If you want to do something like you described, you need to call an external process that prints the IE output, as described here.
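As a sketch of that external-process approach (substituting the wkhtmltoimage command-line renderer for the article's IE technique; the binary must be installed, and the paths are illustrative):

<?php
// Shell out to wkhtmltoimage to render a URL to a PNG on the server.
// The same pattern applies from ASP.NET via System.Diagnostics.Process.
$url = 'http://example.com/';
$out = '/tmp/screenshot.png';
exec(sprintf('wkhtmltoimage %s %s', escapeshellarg($url), escapeshellarg($out)),
  $output, $status);
if ($status !== 0) {
  die('Screenshot process failed.');
}
echo 'Saved to ' . $out;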
Why don't you take another approach?
If you have the need that users can view the same content over again, then it sounds like that is a business requirement for your application, and so you should be building it into your application.
Structure the URL so that when the same user (assuming you have sessions and the application shows different things to different users) visits the same URL, they always see the same thing. They can then bookmark the URL locally, or you can even have an application feature that saves it in a user profile.
Part of this would mean making "clean URLs", e.g. site.com/view/whatever-information-needed-here.
If you are doing time-based data, where it changes as it gets older, there are a couple of possible approaches.
If your data is not changing on a regular basis, then you could always make the "current" page a dated URL, e.g. site.com/view/2008-10-20 (adding hour/minute/second as appropriate).
If it is refreshing and/or updating more regularly, have the "current" page be site.com/view, but allow the exact time to be specified afterwards. In this case, you'd have to have a "link to this page" type of function, which would link to the permanent URL with the full date/time. Look to Google Maps for inspiration here: if you scroll across a map, you can always click "link to here" and it will provide a link that includes the GPS coordinates, objects on the map, etc. In that case it's not a very friendly URL, but it does work quite well. :)
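A tiny sketch of that "link to this page" idea (the parameter names are invented):

<?php
// Serialize the current view state into a shareable permanent URL.
$state = array(
  'view' => 'traffic',                // whatever the user is looking at
  'at'   => gmdate('Y-m-d\TH:i:s\Z'), // pin the exact time
);
$permalink = 'http://site.com/view?' . http_build_query($state);
echo '<a href="' . htmlspecialchars($permalink) . '">Link to this page</a>';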