A colleague set up an intranet application where users can upload documents. These documents are then displayed in an IFRAME using <IFRAME src="document.doc"></IFRAME> - which of course only works in IE. While this works for some users, others (including myself) do not see the document, but instead get a download dialogue allowing them to download it.
I vaguely remember that there was a recent security issue with displaying MS Office documents in IFRAMEs, but I could not find any information on whether a security update blocks this. Anyone here who has a clue?
I am not looking for alternatives to the IFrame; I just want to know why some users are shown the download box while others see the inline document.
If you get a download dialogue instead of displaying the document inline in an iframe, then:
you probably haven't installed the Office Web Components. You can change the components Office has installed from its Add/Remove Programs entry in the Control Panel. But,
DON'T. There have been endless security holes in OWC. Installing a plugin means a great deal of new net-facing code, and consequently great potential for exploitable bugs, especially in software that wasn't originally intended to be net-facing, like Office.
Install the absolute minimum number of plugins you can get away with (these days usually just Flash). Don't install every plugin an application offers you, don't install a PDF plugin, and definitely don't install a load of plugins for Office documents.
Is viewing an Office document in a little annoying scrolly box tucked into a web page really compelling enough to justify the risk? I suggest that no, it's in fact much, much less usable than just downloading the document to the desktop and opening it in a proper document editor/viewer.
You might consider a rich-text editor (RTE) like CKEditor. It allows the user to cut and paste from Word (I assume you are primarily concerned with Word docs, given your problem description) and then to view and edit the content. CKEditor claims to be "compatible with all major browsers."
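As a minimal sketch, assuming the classic CKEditor 4 API is already loaded on the page (the element id here is a hypothetical placeholder):

// Turns an existing <textarea id="document-editor"> into a rich-text
// editor that users can paste Word content into, then view and edit.
// Assumes ckeditor.js from the CKEditor 4 distribution is included.
CKEDITOR.replace('document-editor');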
I've learnt about wget and have downloaded a few directories from the web. However, I've hit some roadblocks.
I'm trying to download from a site which requires a password and username, which I have access to.
There are no apparent directories that I could find by inspecting elements. The site was loading the documents in a reader.js (whatever that is), and it seemed that each page was being fetched as I clicked the arrow button, rather than the whole document at once.
Any ideas would be helpful :)
The site was loading up the documents in a reader.js (whatever that is) and it seemed that each page was being fetched as I clicked the arrow button instead of the whole document.
This suggests that JavaScript is used to fetch the document, so wget is not the right tool in this situation. You would need another tool, namely one that provides browser automation; I suggest giving PhantomJS a try (if you can tolerate writing in a JavaScript-like language) or Selenium (if you can tolerate writing in at least one of its officially supported languages).
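For illustration, here is a minimal sketch using Selenium's JavaScript bindings (the selenium-webdriver npm package). The URLs, form field names, and selectors are hypothetical placeholders you would adapt to the actual site:

// Minimal sketch with selenium-webdriver (npm install selenium-webdriver).
const { Builder, By, until } = require('selenium-webdriver');
const fs = require('fs');

(async function fetchRenderedPage() {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    // Log in through the real form so the session cookie is set.
    await driver.get('https://example.com/login');
    await driver.findElement(By.name('username')).sendKeys('myuser');
    await driver.findElement(By.name('password')).sendKeys('mypassword');
    await driver.findElement(By.css('button[type="submit"]')).click();

    // Open the document reader and wait for JavaScript to render a page.
    await driver.get('https://example.com/reader?doc=123');
    await driver.wait(until.elementLocated(By.css('.page')), 10000);

    // Save the fully rendered DOM - this is what wget never sees.
    fs.writeFileSync('page.html', await driver.getPageSource());
  } finally {
    await driver.quit();
  }
})();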
I am trying to prevent the user from downloading the page as an .html or .aspx file from the browser.
Or is there a way to change the content of the file if it is downloaded?
This is a complex area, with lots of moving parts. The short answer is "there is no way to do this with 100% success; there are a few things you can do which make it harder".
Firstly, you can include JavaScript to disable the right-click context menu. This doesn't stop Ctrl+S, but might discourage casual attempts.
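A minimal sketch of that (all it does is suppress the menu; every other save mechanism still works):

// Suppress the right-click context menu. This only deters casual users;
// Ctrl+S, the browser menu, and the developer tools all still work.
document.addEventListener('contextmenu', function (event) {
  event.preventDefault();
});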
Secondly, you can use DRM in the browser (though this is primarily aimed at protecting media content). As browser support is all over the show, this isn't realistic right now.
Thirdly, you could write your site as a single page web application, and build some degree of authentication into the "retrieve content" logic. This way, saving the page to disk wouldn't bring the content along, just the "page furniture". However, any mechanism you include to only download content when you think you should is likely to be easily subverted by anyone who is moderately motivated.
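As a rough sketch of that approach (the /api/content endpoint and the token handling are hypothetical placeholders, not a real API):

// Sketch of a single-page app that pulls its content through an
// authenticated API call. A copy of the page saved to disk contains only
// this shell, not the content. Note the caveat above: anyone moderately
// motivated can still call the same endpoint themselves.
async function loadContent() {
  const response = await fetch('/api/content', {
    headers: { Authorization: 'Bearer ' + sessionStorage.getItem('token') },
  });
  document.getElementById('content').innerHTML = response.ok
    ? await response.text()
    : 'Please log in to view this content.';
}
loadContent();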
Also, any steps you take to stop people persisting your pages locally are likely to break the caching mechanisms on which the internet depends for performance, so your site would likely be dramatically slower.
No, you can't stop them.
Consider how the web actually works here: once the user has visited your website and loaded your page into their browser, they have already downloaded it - the web page was transmitted from your server to their computer and appeared on their screen.
All they have to do then is click the Save button to keep it permanently on their disk. That doesn't involve downloading it again, it just copies the page data from a temporary folder to a permanent one. Of course it's also possible for people to use another HTTP client (i.e. not a browser, but maybe an existing program, or some code they wrote themselves) to visit the URL of your page and save the returned contents.
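For instance, a few lines of Node.js (version 18+, which has fetch built in) are enough to fetch and save any page the server will hand out; the URL is a placeholder:

// Any HTTP client can retrieve and save a page - no browser involved.
const fs = require('fs');

fetch('https://example.com/your-page')
  .then((response) => response.text())
  .then((html) => fs.writeFileSync('saved-page.html', html));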
It's not clear what problem you think you would solve by stopping people from saving pages. Saving the page is something done within the browser - you as a site developer don't control the user's browser, so you can't prevent that. And if you stop them from downloading your page in the first place then - by definition - you also stop them from using your website...which kind of defeats the point of having one :-).
If you've got some sort of worry about security, you'll have to clarify exactly what you are concerned about, and maybe you can get advice about a sensible way to deal with it.
Good day.
So, here is my issue.
I'm currently using SharePoint 2010 for web applications, and I am supposed to display a PDF as part of a web page. Currently, the browser downloads the PDF file instead of displaying it.
Content-Disposition is already set to inline.
I've also used an iframe whose src points to a custom httpHandler.
I've already added "application/pdf" to the list of AllowedInlineDownloadedMimeTypes, as advised in this link: http://www.pdfshareforms.com/sharepoint-2010-and-pdf-integration-series-part-1/.
However, the application still fails to display it, and it prompts the user to download the file instead.
I'm using Mozilla Firefox v12 and IE8 to test the application; they both exhibit the same behavior.
What else is missing? Thank you.
It's important to remember that not all browsers, especially older ones like Internet Explorer 8, have the ability to render PDF content inline. In these older browsers, this was generally accomplished through plug-ins like Adobe Reader or Foxit being installed on the client machine.
Basically, if you are using an older browser, your users will likely need one of these (or a similar) plug-in installed. Otherwise when the browser encounters a PDF file, it will serve it to the user, as it doesn't really know how to deal with it.
There is also a chance that this could be a permissions/settings issue similar to the one addressed in this related question. You may want to review some of the discussions within that thread, as well as this SharePoint 2010 one, which details a setting called "Browser File Handling" and how its default value of "strict" can affect how PDFs and other files are accessed.
The poster in that thread came across the solution while looking at the "Web Application General Settings": there is a setting called Browser File Handling, and by default it is set to strict.
In %TRIDION_HOME%\web\WebUI\WebRoot\Configuration\System.config we can increment the modification attribute's value to instruct the Content Manager to force browsers to download fresh copies of its files.
The setting is mentioned on the PowerTools discussion but also on the Skinning the Content Manager Explorer topic on SDL Live Content.
<server version="6.1.0.55920" modification="7">
Alternatives to updating the CME include clearing the browser cache (Ctrl+Shift+Delete in Chrome) or adjusting cache settings per user.
Question
Should I use this for any CM-side changes such as GUI extensions, schema changes, or template linked schemas? Or does it only apply to certain parts of the Content Manager Explorer?
In other words, after a schema and template change, what's the best way to make users get the latest versions of components, schema drop-downs, and template selections?
The values of the modification and version attributes become part of the URL of every CSS and JavaScript file that the Tridion UI generates/merges, and of many of the static (image) files too. So the URLs look like this: edit_v.6.1.0.55920.7.aspx?mode=css. Since the browser sees this as a new URL, there is no way it can have the file in its cache yet, and thus it will always download the files from the server instead of using (possibly outdated) files from the local cache.
This technique of injecting version information into the URL is known as "URL fingerprinting". Google commonly embeds a hash value of the file into the URL, ensuring that the fingerprinting happens without requiring developers to increase a version number manually. But whichever method of fingerprinting is used, the technique is a pretty efficient way to ensure that all browsers download the latest version of your code.
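As an illustrative sketch of hash-based fingerprinting (Node.js, built-in modules only; the file name is a placeholder):

// Builds a fingerprinted URL by embedding a hash of the file's content.
// Whenever the content changes, the URL changes, so a browser can never
// serve a stale cached copy.
const crypto = require('crypto');
const fs = require('fs');

function fingerprintedUrl(assetPath) {
  const hash = crypto
    .createHash('md5')
    .update(fs.readFileSync(assetPath))
    .digest('hex')
    .slice(0, 8);
  return assetPath.replace(/(\.\w+)$/, '.' + hash + '$1');
}

console.log(fingerprintedUrl('edit.css')); // e.g. "edit.3f2a9b1c.css"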
If you are developing a GUI extension, you can indeed typically get the same effect by clearing your browser cache or even disabling it completely (for the Tridion domain). But once you roll out your extension to a non-development server, changing the modification attribute is the most certain way to ensure that all your users get the latest JavaScript/CSS changes without each of them having to clear their cache manually.
The URL fingerprinting in Tridion only affects CSS, JavaScript and image files. The actual CMS data (such as Schemas and Components) is loaded using XMLHttpRequests and thus not affected by the modification attribute.
As far as I know,
<server version="6.1.0.55920" modification="7">
This affects only JS- and CSS-related caching. When a user accesses the CM, it then loads all those files, including the latest copies.
Should I use this for any CM-side changes such as GUI extensions, schema changes, or template linked schemas? Or does it only apply to certain parts of the Content Manager Explorer?
For this, the answer is no. Whenever a user makes changes to a schema, the changes should be refreshed across all publications; currently this does not happen in the browser.
Hopefully this will be fixed in upcoming versions.
In other words, after a schema and template change, what's the best way to make users get the latest versions of components, schema drop-downs, and template selections?
Currently, the user has to do a forced refresh to get updated information across all publications.
The SDL Tridion CMS interface caches CMS Items in order to provide faster browsing and loading of its own interface. This does mean that sometimes:
Custom GUI extensions may not display the latest versions of their files
Recently created or modified CMS items may not be shown, or may not show their latest version.
This is why sometimes a new keyword isn't shown within a component field, or a new component template isn't shown when trying to add a component page.
Incrementing the modification number in the node will cause all CMS items to show the latest versions to the CMS user(s). You'll see it uses this value to reference the CSS and JS files used by the CMS GUI.
As a developer I also turn off my Firefox cache (I prefer Firefox for the Firebug extension, which is great for working with GUI extensions), as this means you don't need to go and change this value; a simple browser refresh seems to always do the trick. Turning off the cache is explained here: https://superuser.com/questions/23134/how-to-turn-off-the-firefox-cache
So one of the many, many tasks I'm faced with daily as a developer is trying to get our support department as much information about the end user's environment as possible.
Browser version, current cookies, plugins, etc., and it would be handy to point people to a specific page on our site and say "copy and paste this to support".
In the past I've always written these by hand, and used third party tools (such as BrowserHawk) to get as much info as possible.
How does everyone else deal with getting this information from end users? Is there a nice package I'm unaware of that gives a detailed dump of a user's environment without having to get the user to run an app?
Just to clarify, I'm not looking for ELMAH-style reporting (which is very helpful as well!); this is mainly for the client-side stuff.
Some months ago I saw that the Google Ads page had a nice report button. What this button does is capture the page as it is, using JavaScript, and send a report with all the details, including an image of the actual page.
So I found this library, http://html2canvas.hertzen.com/, which does the same thing.
And here is an example page with this feedback:
http://hertzen.com/experiments/jsfeedback/
So I added this feedback option, and I ask users to point out the issue and send the feedback; that way, for each page I have a very nice image of what is going wrong.
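As a minimal sketch of how such a feedback button can be wired up (this assumes a recent html2canvas build, which returns a Promise, and a hypothetical /feedback endpoint on your own server):

// Capture a screenshot of the current page with html2canvas and post it,
// together with some basic environment details, to the server.
document.getElementById('report-button').addEventListener('click', function () {
  html2canvas(document.body).then(function (canvas) {
    fetch('/feedback', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        screenshot: canvas.toDataURL('image/png'), // the page as the user sees it
        userAgent: navigator.userAgent,            // browser/environment details
        url: location.href,
      }),
    });
  });
});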
The next thing is that I log and check all errors, and I fix them promptly.
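And a sketch of the error logging itself (the /log-error endpoint is again a placeholder; navigator.sendBeacon keeps the report from being lost if the page is unloading):

// Report any uncaught JavaScript error to the server.
window.onerror = function (message, source, lineno, colno, error) {
  navigator.sendBeacon('/log-error', JSON.stringify({
    message: String(message),
    source: source,
    line: lineno,
    column: colno,
    stack: error && error.stack,
  }));
};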