I know Facebook Open Graph self-hosted object URLs should have meta tags describing the object.
But I was wondering whether they are supposed to contain anything else at all.
That is, should they also serve user-facing content, or are they only used for Facebook's scraping?
It depends on what you are trying to do. Take a look at Facebook's Sharing Best Practices documentation.
I would like to use Firebase Dynamic Links for sharing posts in my app via a link.
I found out that the Firebase Admin SDK can't generate Dynamic Links. So that leaves a question: should each user generate a Dynamic Link each time they want to share a post via a link?
If there are 10k posts, a lot of links will be generated.
Having a link like https://myApp.app.link/postID would be more efficient because everybody could use the same URL to share.
Is this possible or should each user generate a personal share link?
Thanks in advance!
There is a REST API that you can use to generate Dynamic Links too. It sounds like you're looking to use that.
If the Admin SDK were to add support for Dynamic Links, it would likely be a fairly thin wrapper around that REST API.
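For illustration, here is a minimal C# sketch of calling that REST API (the endpoint is the public Dynamic Links shortener; the API key, domain prefix, and deep link below are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class DynamicLinkExample
{
    static async Task Main()
    {
        // Placeholders: use your project's Web API key and page.link domain.
        const string apiKey = "YOUR_WEB_API_KEY";
        var requestBody = @"{
          ""dynamicLinkInfo"": {
            ""domainUriPrefix"": ""https://myApp.page.link"",
            ""link"": ""https://myApp.example.com/posts/postID""
          },
          ""suffix"": { ""option"": ""SHORT"" }
        }";

        using var client = new HttpClient();
        var response = await client.PostAsync(
            $"https://firebasedynamiclinks.googleapis.com/v1/shortLinks?key={apiKey}",
            new StringContent(requestBody, Encoding.UTF8, "application/json"));

        // On success the response JSON contains a "shortLink" field.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

Note that a link built from the post ID (as in the question) only needs to be generated once and can then be reused by every user who shares that post.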
I've used services like AddThis for a while, but now I need to add a couple of specific bits of functionality to an ecommerce order-completion page. It needs to work like Amazon's order thank-you page, which lets you post a message to Facebook saying something like 'I just bought a widget on Amazon'.
I'm also looking for the equivalent for Twitter.
I've added a bunch of OG tags and share buttons but can't get them to do what I need. From further reading, it sounds like I might need to create a Facebook app of some sort and use FB.ui to create the link that posts to the user's wall. I was hoping to do this without getting tangled up in that level of permissions, but maybe that's not possible any more?
This is being developed on asp.net C#, in case there's a library that I haven't found in my searching.
Can anyone familiar with this type of development point me in the right direction?
For Twitter, the simplest way is to use Web Intents.
For example, if you want to share the text
I love http://example.com
URL-encode the text to I%20love%20http%3A%2F%2Fexample.com and append it to the Twitter Web Intent URI, e.g.
https://twitter.com/intent/tweet?text=I%20love%20http%3A%2F%2Fexample.com
When the user clicks on that link (try it!) or is directed there by your service, they'll be prompted to share that text.
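Since the question above mentions ASP.NET/C#, here is a minimal sketch of building that intent URL server-side (the tweet text is just the example above; the redirect comment assumes ASP.NET MVC):

```csharp
using System;

class TweetIntentExample
{
    static void Main()
    {
        // URL-encode the text so it survives as a single query parameter.
        var text = "I love http://example.com";
        var intentUrl = "https://twitter.com/intent/tweet?text="
                        + Uri.EscapeDataString(text);

        // In an ASP.NET MVC controller action you would typically send the
        // user there, e.g. return Redirect(intentUrl);
        Console.WriteLine(intentUrl);
        // Prints: https://twitter.com/intent/tweet?text=I%20love%20http%3A%2F%2Fexample.com
    }
}
```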
Using LinkedIn via Buffer, I can share content with only an attached image, i.e. no accompanying comment, URL, or any text. It is also possible via the web interface.
However, when using their Share API, you are forced to include a URL, and it is displayed with the content. Buffer must be using the API in some way, so how do you get around this?
I have been looking for the same solution.
How to get the large LinkedIn image share format:
basically, you just pass the image URL in the submitted-url field and don't pass the submitted-image-url parameter in the JSON.
I was unable to find this solution on Google or Stack Overflow. I hope this helps someone in the future.
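For illustration, a hedged C# sketch of what that request might look like against the old v1 Share API (the endpoint and fields match the v1 Share API this answer appears to target, which LinkedIn has since deprecated; the access token and image URL are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class LinkedInImageShareExample
{
    static async Task Main()
    {
        // Placeholder OAuth token for the (now deprecated) v1 Share API.
        const string accessToken = "ACCESS_TOKEN";

        // The trick from the answer: put the image URL in submitted-url
        // and omit submitted-image-url entirely.
        var body = @"{
          ""comment"": """",
          ""content"": { ""submitted-url"": ""https://example.com/my-image.jpg"" },
          ""visibility"": { ""code"": ""anyone"" }
        }";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        var response = await client.PostAsync(
            "https://api.linkedin.com/v1/people/~/shares?format=json",
            new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```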
Is it possible to track whether someone links to data on my site? Specifically, whether my data is used in a site dynamically generated by a developer program? I would like to know if someone is blatantly passing off my site's data as their own. There are obviously ways around directly linking to content, such as content manipulation or even manual copying. But if someone were to link to (or copy word for word, or manipulate) my content on their website, is there a way to track it?
Can I prevent someone from scraping my website at all, or is everything just up for grabs?
The best and easiest answer is Google Webmaster Tools (HERE).
Actually, doing this yourself is very hard: you would need to crawl the web to discover the links that point to your pages. Dynamically generated content gets linked too, so Google will find it as well.
This tool will let you see the external links that point to your site, and you can check them.
As an extra step, you can monitor requests and traffic to your site and look for IPs that request the same page over and over again. That can tell you that an external page is dynamically loading content from your web page.
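As a rough sketch of that last idea, here is a small C# program that counts (IP, path) pairs in a common-format access log and flags addresses hammering the same page (the log path, format, and threshold are all assumptions):

```csharp
using System;
using System.IO;
using System.Linq;

class RepeatRequestFinder
{
    static void Main()
    {
        // Assumes a common/combined-format log:
        // 203.0.113.9 - - [date] "GET /some/page HTTP/1.1" 200 1234 ...
        var lines = File.ReadLines("/var/log/nginx/access.log");

        var repeats = lines
            .Select(l => l.Split(' '))
            .Where(parts => parts.Length > 6)
            .Select(parts => (Ip: parts[0], Path: parts[6]))
            .GroupBy(hit => hit)
            .Where(g => g.Count() > 100)   // arbitrary threshold
            .OrderByDescending(g => g.Count());

        foreach (var g in repeats)
            Console.WriteLine($"{g.Key.Ip} requested {g.Key.Path} {g.Count()} times");
    }
}
```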
EDIT:
Here is a good article on this subject: link. Scroll down and you can see the use of the Google Webmaster Tools alongside some other programs and methods.
Here is a good starting guide to the Google Webmaster Tools: link.
Enjoy!
Dear all, I am now using a web tool,
http://fiddesktop.cs.northwestern.edu/mmp/scrape?url=
to parse a webpage.
For example, to parse the New York Times homepage, we enter:
http://fiddesktop.cs.northwestern.edu/mmp/scrape?url=http://www.nytimes.com/pages/world/index.html
in the address bar of our browser, and it parses things nicely for us.
However, it just fails for Google pages.
For example, if I want to parse the Google News front page, like:
http://fiddesktop.cs.northwestern.edu/mmp/scrape?url=http://news.google.com/nwshp?hl=en&tab=wn
I always get a 500 Internal Server Error.
I am sure it is something to do with the Google website. I think we probably need some API for Google. Does anyone have any idea how to sort this out for Google pages?
Many thanks.
Per the google.com robots.txt file, you are explicitly requested not to scrape their content. Google does not provide an API for machine-readable search results; they want to control the presentation of their content via widgets and embedding strategies.
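As a quick way to see this for yourself, here is a small C# sketch that fetches google.com's robots.txt and prints its Disallow rules (a deliberately simplified parse, not a full robots.txt implementation):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class RobotsCheckExample
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var robots = await client.GetStringAsync("https://www.google.com/robots.txt");

        // Simplified: just print the Disallow lines to see which paths
        // Google asks crawlers to stay away from.
        foreach (var line in robots.Split('\n'))
            if (line.StartsWith("Disallow:", StringComparison.OrdinalIgnoreCase))
                Console.WriteLine(line.Trim());
    }
}
```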