I want to generate a QR code that contains both a URL and contact details. The real problem is that I want to perform both operations, adding the contact details to the contact list and opening the URL in a web browser, with any QR code reader app. Is this possible in some way?
Please suggest a solution.
Thanks in advance
You can think of QR codes like text files: only the program that parses the file decides what to do.
Maybe some scanners will recognise the URL and the vCard separately, but you can't really count on it.
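For what it's worth, the vCard format itself defines a URL property, so a single QR code payload can carry both the contact details and the link; whether a given reader app offers to open that URL is still entirely up to the app. A minimal sketch of such a payload (all values made up):

    BEGIN:VCARD
    VERSION:3.0
    FN:Jane Example
    TEL:+15550100
    URL:http://example.com/
    END:VCARD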
I found a website which is relevant to the topic that I am researching - getting an IFrame's current URL address from another domain.
Here it is:
http://hidemyipaddress.org/
(to use it simply go to the bottom, enter a website address and click "go").
You can surf any website through their site, and the amazing thing is that it keeps track of your current location and even shows it to you. (Here is a picture to illustrate: http://img199.imageshack.us/img199/6343/image2eb.jpg)
The reason I am asking is because I am trying to do the same thing.
How is this possible? Isn't that XSS or something?
Thanks for taking your time on this.
First off, I saw similar posts already, but they weren't exactly what I am asking.
I used the Facebook Dev to create a Like button for my website, stuck the code in, and the button showed up. The only issue is that it likes the wrong URL when I click the button.
I'm pretty sure the issue is that I have the site set to redirect automatically from mydomain.com to the most recent post. I think this is gumming up the works with the Like button and causing it to like mydomain.com/mostrecentpost instead of simply liking mydomain.com.
Is there a way to correct this issue without having to get rid of the redirect (because that isn't an option)?
Sorry if that was a little wordy, wanted to make sure I explained the issue fully.
Is there a way to correct this issue without having to get rid of the redirect (because that isn't an option)?
Either don't redirect in those cases where the request's user agent header identifies Facebook's scraper;
or set the canonical URL of http://example.com/mostrecentpost to just http://example.com/ using the appropriate Open Graph meta tag. (Although that would mean you would not be able to like a single post any more, because all of your posts would just point to your base domain as the “real” website; so the first is probably the better option.)
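A minimal sketch of the first option in PHP, assuming your redirect happens server-side (Facebook's scraper identifies itself with a user agent containing "facebookexternalhit"):

    <?php
    // Sketch: only redirect real visitors, not Facebook's scraper, so the
    // scraper reads the Open Graph tags of http://example.com/ itself.
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    if (stripos($ua, 'facebookexternalhit') === false) {
        header('Location: http://example.com/mostrecentpost');
        exit;
    }
    // Fall through: serve the page (with its og: meta tags) to the scraper.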
I want to make a site where the user can basically navigate the web from within an iframe. The catch is that I'd like to have more control over what is rendered within the iframe. Specifically,
I'd like to be able to filter out images or text, disable forms etc.
I'd also like to be able to gather feedback such as what links the users clicked on.
Question 1:
Is this even possible using a standard back-end scripting language (like PHP), with HTML and JavaScript on the frontend?
Question 2:
Would I first need to grab the source of the site before it is rendered, then do whatever manipulation is necessary, and finally re-render it somehow?
Question 3:
Could somebody please explain the programming flow that would occur here (assuming it's possible)?
I think you would probably want to grab the source of the site (with server-side code) before rendering it. You might run into cross-site scripting issues if you try to use JavaScript. Your iframe would load a page like render.php and pass the address of the page to render as a querystring parameter. Then use regular expressions to find elements in the HTML that render.php downloads from that address, rewrite the HTML as necessary, and finally write it all out to the iframe.
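A rough sketch of what render.php might look like (the names and the regexes are purely illustrative; a real version would need far more robust parsing and error handling):

    <?php
    // render.php?url=http://example.com/page
    // Hypothetical sketch: fetch the target page server-side,
    // strip what you don't want, and emit the rest into the iframe.
    $url  = $_GET['url'];
    $html = file_get_contents($url);                      // grab the source
    if ($html === false) {
        exit('Could not load ' . htmlspecialchars($url));
    }
    $html = preg_replace('/<img\b[^>]*>/i', '', $html);   // e.g. filter out images
    $html = preg_replace('/<form\b/i', '<form onsubmit="return false"', $html); // disable forms
    echo $html;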
Rewrite links so that the user is taken to a page you control and then redirected on to the target site if you want to track where people are going. Example: a link in the page needs to go to google.com. You would send the user to tracker.php?target=http://google.com. You control tracker.php, so you can log each load of this page and then redirect the user to the target site.
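And a sketch of tracker.php along those lines (in production you would want to validate the target against a whitelist, since an open redirect like this can be abused):

    <?php
    // tracker.php?target=http://google.com
    // Hypothetical sketch: log the click, then redirect to the target.
    $target = isset($_GET['target']) ? $_GET['target'] : '/';
    file_put_contents('clicks.log', date('c') . ' ' . $target . "\n", FILE_APPEND);
    header('Location: ' . $target);
    exit;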
Update:
Another possible solution is to use Apache or other server to proxy the target website. There are modules like mod_proxy for this. There may also be modules that let you parse the HTML or you could roll your own.
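For example, a minimal reverse-proxy setup with mod_proxy might look like this (assumes mod_proxy and mod_proxy_http are enabled; the paths and target are illustrative):

    # Apache config sketch: serve http://example.com/ under /browse/
    ProxyRequests Off
    ProxyPass        /browse/ http://example.com/
    ProxyPassReverse /browse/ http://example.com/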
I should point out that even the best solutions offered to your question will be somewhat brittle if you do not have full control over the target site. You will want to have lots of error handling or alerting.
You can have a look at this. It uses iframes really well, and maybe you can even use the library it has.
I have an ASP.NET application with a desired feature: users would like to be able to take a screenshot. While I know this can be simulated, it would be really great to have a way to take a URL (or the currently rendered page) and turn it into an image which can be stored on the server.
Is this crazy? Is there a way to do it? If so, any references?
I can tell you right now that there is no way to do it from inside the browser, nor should there be. Imagine that your page embeds GMail in an iframe. You could then steal a screenshot of the person's GMail inbox!
This could be made safe by having the browser "black out" all iframes and embeds that would violate cross-domain restrictions.
You could certainly write an extension to do this, but be aware of the security considerations outlined above.
Update: You can use a canvas utility function to get a screenshot of a page on the same origin as your code. There's even a library that lets you do this: http://experiments.hertzen.com/jsfeedback/
You can find other possible answers here: Using HTML5/Canvas/JavaScript to take screenshots
Browsershots has an XML-RPC interface and available source code (in Python).
I used the free assembly UrlScreenshot.dll which you can download here.
Works nicely!
There is also WebSiteScreenShot but it's not free.
You could try a browser plugin like IE7 Pro for Internet Explorer, which allows you to save a screenshot of the current site to a file on disk. I'm sure there is a comparable plugin for Firefox out there as well.
If you want to do something like you described, you need to call an external process that prints the IE output, as described here.
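As a sketch of that general approach, you could shell out from your page to a command-line renderer. Here is a rough PHP example using wkhtmltoimage, which is a different tool than the one the linked article describes, purely for illustration:

    <?php
    // Hypothetical sketch: render a URL to an image by shelling out to a
    // command-line renderer (wkhtmltoimage here, just as an example tool).
    $url  = 'http://example.com/';
    $file = 'screenshot.png';
    exec('wkhtmltoimage ' . escapeshellarg($url) . ' ' . escapeshellarg($file), $output, $status);
    if ($status !== 0) {
        // rendering failed; handle or log the error
    }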
Why don't you take another approach?
If users need to be able to view the same content over again, then that sounds like a business requirement for your application, and so you should build it into your application.
Structure the URL so that when the same user (assuming you have sessions and the application shows different things to different users) visits the same URL, they always see the same thing. They can then bookmark the URL locally, or you can even have an application feature that saves it in a user profile.
Part of this would mean making "clean URLs", e.g. site.com/view/whatever-information-needed-here.
If you are working with time-based data, where content changes as it gets older, there are a couple of possible approaches.
If your data is not changing on a regular basis, then you could make the "current" page date-stamped, e.g. site.com/view/2008-10-20 (adding hour/minute/second as appropriate).
If it is refreshing and/or updating more regularly, have the "current" page as site.com/view, but allow specifying the exact time afterwards. In this case, you'd have to have a "link to this page" type of function, which would link to the permanent URL with the full date/time. Look to Google Maps for inspiration here: if you scroll across a map, you can always click "link to here" and it will provide a link that includes the GPS coordinates, objects on the map, etc. In that case it's not a very friendly URL, but it does work quite well. :)
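To make that concrete, a small sketch of a date-stamped "view" route in PHP (render_snapshot is a made-up function standing in for your application's real rendering code):

    <?php
    // Hypothetical sketch: /view/2008-10-20 always shows the data as of
    // that date, so a bookmarked URL keeps rendering the same thing.
    $date = isset($_GET['date']) ? $_GET['date'] : date('Y-m-d');  // default: today
    if (!preg_match('/^\d{4}-\d{2}-\d{2}$/', $date)) {
        http_response_code(400);
        exit('Bad date');
    }
    echo render_snapshot($date);  // your application's rendering function (assumed)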