I have the following acceptance criteria for creating a PDF file from my ASP.NET page, which contains nested RadGrid controls:
The current view of the page should be converted to PDF, which means the viewstate and session information of the current page request must be taken into account. This leaves me with only one option: perform the PDF conversion in the page's Render() stage during the current session when a PDF postback is sent.
The ASP.NET page layout is changed using jQuery in $(document).ready(...), which means the converter must not only render the HTML but also run the JavaScript on it so the layout changes (e.g. column alignments) appear in the final PDF file. I hope this is possible; otherwise ...
The ASP.NET page only appears correctly in IE 6+, therefore the PDF tool must use the IE rendering engine.
Could you please advise which tool can help in such a scenario?
I downloaded and tested the EvoPdf tool, but apparently it doesn't support the IE rendering engine (only Firefox rendering), and I couldn't get JavaScript execution to work correctly with it.
I'm going to evaluate ABCPdf and Winnovative, but I'm not sure they support what I want.
If I can't find a tool to help with the above, another possible solution might be to take a screenshot of the page using client script (I don't know whether that's possible), send it to the server, and finally convert that image to PDF.
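For reference, the server-side half of that fallback might look something like this, assuming a library such as iTextSharp and a screenshot that arrives as PNG bytes (purely a sketch, not something I've tested):

using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

// Wrap uploaded screenshot bytes (e.g. a PNG posted from the client)
// in a single-page PDF.
static byte[] ImageToPdf(byte[] imageBytes)
{
    using (var ms = new MemoryStream())
    {
        var doc = new Document();
        PdfWriter.GetInstance(doc, ms);
        doc.Open();

        Image image = Image.GetInstance(imageBytes);
        // Scale the screenshot to fit the page, leaving a small margin
        image.ScaleToFit(doc.PageSize.Width - 40, doc.PageSize.Height - 40);
        doc.Add(image);

        doc.Close();
        return ms.ToArray();
    }
}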
Many thanks,
You can try WebToPDF.NET.
Try converting the HTML you get after the ASP.NET page has been rendered.
WebToPDF.NET supports JavaScript (and jQuery), so that's not a problem.
WebToPDF.NET passes all W3C tests (except BIDI) and supports HTML 4.01, JavaScript, XHTML 1.0, XHTML 1.1 and CSS 2.1 including page breaks, forms and links.
I don't know exactly about your requirements, but have a look at wkhtmltopdf:
How to use wkhtmltopdf.exe in ASP.net
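For reference, a minimal sketch of shelling out to wkhtmltopdf.exe from C# (the paths below are placeholders, and the flags vary by version). Note that wkhtmltopdf uses a WebKit-based engine rather than IE, so it runs JavaScript but would not satisfy the IE-rendering requirement:

using System.Diagnostics;

// Placeholder paths - adjust to your server layout.
string exePath = @"C:\Program Files\wkhtmltopdf\wkhtmltopdf.exe";
string url = "http://localhost/MyPage.aspx";
string outputPath = @"C:\temp\page.pdf";

var startInfo = new ProcessStartInfo
{
    FileName = exePath,
    // --javascript-delay gives jQuery's document.ready handlers time to run
    Arguments = string.Format("--javascript-delay 2000 \"{0}\" \"{1}\"", url, outputPath),
    UseShellExecute = false,
    CreateNoWindow = true
};

using (Process process = Process.Start(startInfo))
{
    process.WaitForExit();
}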
Winnovative did exactly what I needed :) It uses the IE rendering engine, unlike EvoPdf.
I haven't had time to test other tools.
Thanks
EvoPdf is developed by the same team who develop ExpertPDF (http://www.html-to-pdf.net/). ExpertPDF is the older product, so although the APIs are almost identical, the EvoPdf API is slightly more refined.
The main difference between the products is that ExpertPDF uses the local IE rendering engine.
Winnovative HTML to PDF Converter does not use IE as its rendering engine. It is compatible with WebKit rendering and does not depend on IE or any other third-party tools.
You can convert the current HTML page by overriding the Render() method of the ASP.NET page and capturing the HTML code being rendered by the page. You can find a complete example with C# source code in the Convert the Current HTML Page to PDF demo.
Here is the relevant source code for this approach:
// Controls if the current HTML page will be rendered to PDF or as a normal page
bool convertToPdf = false;

protected void convertToPdfButton_Click(object sender, EventArgs e)
{
    // The current ASP.NET page will be rendered to PDF when its Render method is called by the framework
    convertToPdf = true;
}

protected override void Render(HtmlTextWriter writer)
{
    if (convertToPdf)
    {
        // Get the current page HTML string by rendering into a TextWriter object
        TextWriter outTextWriter = new StringWriter();
        HtmlTextWriter outHtmlTextWriter = new HtmlTextWriter(outTextWriter);
        base.Render(outHtmlTextWriter);

        // Obtain the current page HTML string
        string currentPageHtmlString = outTextWriter.ToString();

        // Create a HTML to PDF converter object with default settings
        HtmlToPdfConverter htmlToPdfConverter = new HtmlToPdfConverter();

        // Set license key received after purchase to use the converter in licensed mode
        // Leave it not set to use the converter in demo mode
        htmlToPdfConverter.LicenseKey = "fvDh8eDx4fHg4P/h8eLg/+Dj/+jo6Og=";

        // Use the current page URL as base URL
        string baseUrl = HttpContext.Current.Request.Url.AbsoluteUri;

        // Convert the current page HTML string to a PDF document in a memory buffer
        byte[] outPdfBuffer = htmlToPdfConverter.ConvertHtml(currentPageHtmlString, baseUrl);

        // Send the PDF as response to the browser

        // Set response content type
        Response.AddHeader("Content-Type", "application/pdf");

        // Instruct the browser to open the PDF file as an attachment or inline
        Response.AddHeader("Content-Disposition", String.Format("attachment; filename=Convert_Current_Page.pdf; size={0}", outPdfBuffer.Length.ToString()));

        // Write the PDF document buffer to the HTTP response
        Response.BinaryWrite(outPdfBuffer);

        // End the HTTP response and stop the current page processing
        Response.End();
    }
    else
    {
        base.Render(writer);
    }
}
I'm working on a SharePoint 2013 site and I've added the ability to save pages as PDF. The PDF conversion is handled by the third-party library SelectPdf.
I managed to get everything to work (rendering and file download), except that the "PDF Download" button on my page works only one time: the click event in the code-behind fires only once, no matter how many times I click the button (note that I click it at intervals of 10+ seconds). If I want to download the PDF file again, I have to refresh the page.
I put together a "hello world" example (see below) in order to pinpoint the problem:
protected void lnkPdfDownload_Click(object sender, EventArgs e)
{
    Response.Clear();
    Response.ClearContent();
    Response.ClearHeaders();
    Response.ContentType = "application/pdf";
    Response.AddHeader("content-disposition", "attachment;filename=test.pdf");

    /************************************ Create PDF File ************************************/
    string html = @"<!DOCTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Strict//EN"" ""http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"">
<html dir=""ltr"" lang=""en-US"">
<body><h1>Hello World</h1></body>
</html>";

    HtmlToPdf converter = new HtmlToPdf();
    PdfDocument doc = converter.ConvertHtmlString(html);
    byte[] bytes = doc.Save();
    Response.OutputStream.Write(bytes, 0, bytes.Length); // ALTERNATIVE: doc.Save(Response.OutputStream);
    /************************************ Create PDF File ************************************/

    //Response.End(); // This throws a ThreadAbortException, therefore I'm using the alternative code below
    Response.Flush();
    Response.SuppressContent = true;
    HttpContext.Current.ApplicationInstance.CompleteRequest();
}
At the beginning I thought it was Response.End() that caused the issue (by throwing the ThreadAbortException), but I replaced it with other code and I still have the same problem (no exceptions are thrown now).
I don't think the problem is in the SelectPdf library: I can comment out the entire block (between the "Create PDF File" comments), and I still get the same thing (obviously no PDF is generated).
I noticed that, at most, I can successfully click the "download" button up to 2 times (it's rare, and not consistent): the third time nothing happens.
While this isn't a huge deal, I think something is going wrong that I'm not seeing. Here is why: after I click the "download" button (and get my PDF file), I am not able to go into edit mode on my SharePoint page. The "loading" message keeps spinning but nothing happens (again, unless I refresh the page).
Has anyone had this problem? I looked online but I couldn't find anything about it.
I'm using Internet Explorer 11 and Chrome 51. Please let me know if you need more information. Thank you.
Are you sure there are no JavaScript/jQuery errors happening when the download button is clicked that prevent re-clicking the PDF button and also going into edit mode?
Especially since refreshing the page makes everything work again.
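If it does turn out to be the page lifecycle (the postback being short-circuited by CompleteRequest), one workaround sometimes used is to serve the file from a standalone generic handler rather than from the page postback, so the page itself is never interrupted. A minimal sketch reusing the same SelectPdf calls from the question (the handler name and its registration are assumptions):

using System.Web;
using SelectPdf;

public class PdfDownload : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string html = "<html><body><h1>Hello World</h1></body></html>";

        // Same conversion calls as in the question
        HtmlToPdf converter = new HtmlToPdf();
        PdfDocument doc = converter.ConvertHtmlString(html);
        byte[] bytes = doc.Save();
        doc.Close();

        context.Response.ContentType = "application/pdf";
        context.Response.AddHeader("content-disposition", "attachment;filename=test.pdf");
        context.Response.OutputStream.Write(bytes, 0, bytes.Length);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

The "PDF Download" button then becomes a plain link to PdfDownload.ashx rather than a server-side postback, leaving the SharePoint page's own lifecycle untouched.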
Overview
We have an in house CMS that we've recently added multilingual support to. The CMS allows dragging/dropping of various panels (.net controls) and some panels show dynamic content entered via a rich text editor. Also, some fields are multilingual so some panel content will change according to the current language.
Ideally we want to add the language to the URL. So /contact-us becomes /en/contact-us.
Our main handler will then set the language, and all panels will show the relevant copy.
Goal
So, ideally we'd like to be able to:
Process the page server side after it's been built by our main page assembler (e.g. in PreRender)
Parse the built page or recurse the control tree to update ALL internal links
Prepend a language code to all internal links on the page (easy enough once we know where they all are)
NB: Some links will be in .NET HyperLink controls but others will be <a> tags entered via a rich text editor.
Stuff I've looked at
I've skimmed google but haven't found anything that seems to match our needs:
Html Agility Pack - can be used to take a URL and parse it for links, but I'm guessing this can't be used, say, in PreRender of our main page builder. Ideal for scraping, I suppose.
Various JS solutions - locate links and update. Very easy but I'm wary of using JS for updating URLs client side.
All suggestions welcome :)
So, there will be dynamic content and static content, and the CMS users should be able to edit both of them. You should have a Language DB table; for instance, for the "about us" page there should be about-us EN, about-us DE, and about-us FR rows in another table.
You should also have another table for static content, for instance the contact us form. There are static texts on contact forms: name, e-mail, message, etc.
This can be done by overriding Page.Render() as follows:
protected override void Render(HtmlTextWriter htmlWriter)
{
    StringBuilder ThisSB = new StringBuilder();
    StringWriter ThisSW = new StringWriter(ThisSB);
    HtmlTextWriter RenderedPage = new HtmlTextWriter(ThisSW);

    // pass our writer to base.Render to generate page output
    base.Render(RenderedPage);

    // get rendered page as a string
    string PageResult = ThisSB.ToString();

    // modify the page
    string ModifiedPage = UpdatePage(PageResult);

    // write modified page to client
    htmlWriter.Write(ModifiedPage);
}
The UpdatePage method can manipulate the page as a string in any way you wish - in our case we use it to find and update all links and local file paths.
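For illustration, UpdatePage might look something like this with Html Agility Pack, which can parse an in-memory string and therefore works fine at this point in the lifecycle (the hard-coded /en prefix and the internal-link test are simplifications):

using HtmlAgilityPack;

string UpdatePage(string html)
{
    var doc = new HtmlDocument();
    doc.LoadHtml(html);

    // Covers both HyperLink controls and <a> tags from the rich text editor,
    // since by now everything has been rendered to plain HTML.
    var links = doc.DocumentNode.SelectNodes("//a[@href]");
    if (links != null)
    {
        foreach (var link in links)
        {
            string href = link.GetAttributeValue("href", "");

            // Simplified test: rewrite site-relative links that lack a language code
            if (href.StartsWith("/") && !href.StartsWith("/en/"))
            {
                link.SetAttributeValue("href", "/en" + href);
            }
        }
    }

    return doc.DocumentNode.OuterHtml;
}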
I'm writing a WinJS app that takes an HTML fragment the user has copied to the clipboard, replaces the image src URLs with base64-encoded data, and saves the result as an .html file.
Later, when I go to display the .html, I create an iframe element (using jQuery's $('<iframe/>')) and attempt to source the .html into it, and get the following error:
0x800c001c - JavaScript runtime error: Unable to add dynamic content. A script attempted to inject dynamic content, or elements previously modified dynamically, that might be unsafe. For example, using the innerHTML property to add script or malformed HTML will generate this exception. Use the toStaticHTML method to filter dynamic content, or explicitly create elements and attributes with a method such as createElement. For more information, see http://go.microsoft.com/fwlink/?LinkID=247104.
I don't get the exception if I don't base64-encode the images, i.e. leave them intact, and the page can then display the iframes with their images showing.
If I take the HTML after substituting the URLs for base64 and run it through toStaticHTML, it removes the src= attribute completely from the img tags.
I know the .html with the encoded PNGs is right because I can open it in Chrome and it displays fine.
My question is why it strips the src= attributes from the img tags, and how to fix it - for instance, by creating the iframe without jQuery via some MS voodoo, or with a different technique to sanitize the HTML?
So, a solution I discovered (not 100% convinced it's the best, and I am still looking for something a little less M$-specific) is the MS WebView:
http://msdn.microsoft.com/en-us/library/windows/apps/bg182879.aspx#WebView
I use some code like below (where content is the html string with base64 encoded images)
var loadHtmlSuccess = function (content) {
    var webview = document.createElement("x-ms-webview");
    webview.navigateToString(content);
    assetItem.append(webview);
};
I believe you want to use execUnsafeLocalFunction. For example:
var target = document.getElementById('targetDIV');
MSApp.execUnsafeLocalFunction(function () {
    target.innerHTML = content;
});
I am currently writing a ContentManager in ASP.NET. I have a preview button which uses jQuery to post the form data to a new window and show how a page would look without saving it to the database and affecting the live site. Although it's been somewhat of a hassle to get ASP.NET to post directly to the page I am trying to preview, I've finally worked it all out using a series of jQuery calls. It worked beautifully: I loaded all the post values into the page using Request.Form and displayed them on the page.

Unfortunately, for some reason the Telerik RadEditors I was using were posting the values they had been assigned in the C# Page_Load event and did not reflect the text changes I made. If anyone could help me out that would be great.
function showPreview()
{
    var url = "<%= (SiteManager.GetSite()).Url + this.Filename %>?preview=true";
    var specs = "width=1010,height=700,location=0,resizable=1,status=1,scrollbars=1";
    window.open(url, 'PagePreview', specs).moveTo(25, 25);

    $("#__VIEWSTATE").remove();
    $("#__EVENTTARGET").remove();
    $("#__EVENTARGUMENT").remove();

    $("#aspnetForm").removeAttr("action");
    $("#aspnetForm").attr("target", "PagePreview");
    $("#aspnetForm").attr("action", url);
    $("#aspnetForm").submit();
}
Here is all the post data I am receiving from the Telerik RadEditor:
[ctl00_MainContentPlaceHolder_SideContentRadEditor_dialogOpener_Window_ClientState] => [ctl00_MainContentPlaceHolder_SideContentRadEditor_dialogOpener_ClientState] => [ctl00$MainContentPlaceHolder$SideContentRadEditor] => [ctl00_MainContentPlaceHolder_SideContentRadEditor_ClientState] => [ctl00_MainContentPlaceHolder_ContentRadEditor_dialogOpener_Window_ClientState] => [ctl00_MainContentPlaceHolder_ContentRadEditor_dialogOpener_ClientState] => [ctl00$MainContentPlaceHolder$ContentRadEditor] => %3cp%3eTestPageContent%3c/p%3e
This is the HTML value of the text editor (shown above): %3cp%3eTestPageContent%3c/p%3e
This is the value that was loaded into the RadEditor during the Page_Load event. I changed the value to "Test", but that change was not sent in the POST request; it sent what was loaded during Page_Load.
The editor content area is separate from the textarea used to submit the content during a POST request. The editor will automatically try to save the content in the hidden textarea when the form is submitted, but in your case no event is fired because it happens programmatically (i.e. you call .submit()). You will need to tell the editor to save its content manually before you do the postback. The code is pretty basic - get a reference to the editor and call .saveContent():
//Grab a reference to the editor
var editor = $find("<%=theEditor.ClientID%>");
//Store the content in the hidden textarea so it can be posted to the server
editor.saveContent();
One solution would be to grab the current HTML in the editor in your showPreview method and pass that manually. To do that, add a hidden input element in your page to hold the HTML content:
<input type="hidden" id="htmlContent" name="htmlContent" />
Then, you can set that input's value in showPreview like this:
function showPreview()
{
    var url = "<%= (SiteManager.GetSite()).Url + this.Filename %>?preview=true";
    var specs = "width=1010,height=700,location=0,resizable=1,status=1,scrollbars=1";
    window.open(url, 'PagePreview', specs).moveTo(25, 25);

    $("#__VIEWSTATE").remove();
    $("#__EVENTTARGET").remove();
    $("#__EVENTARGUMENT").remove();

    // *** Begin New Code ***

    //Grab a reference to the editor
    var editor = $find("<%=theEditor.ClientID%>");

    //Get the current HTML content
    var html = editor.get_html();

    //Put that HTML into this input so it will get posted
    $("#htmlContent").val(html);

    // *** End New Code ***

    $("#aspnetForm").removeAttr("action");
    $("#aspnetForm").attr("target", "PagePreview");
    $("#aspnetForm").attr("action", url);
    $("#aspnetForm").submit();
}
Then when you want to get the HTML during the postback you can just use Request.Form["htmlContent"]
One caveat: since you'll be posting raw HTML, ASP.NET's request validation might cause problems. One of the major purposes of that validation is to make sure that HTML content doesn't get posted back to the server - the very thing you're trying to accomplish. You could of course turn the validation off (see the link above), but the validation is there for a reason. Another solution might be to do some basic encoding of the HTML before you post it. If you just replace all less-than symbols (<) with something before posting, ASP.NET will be happy. Then you just need to 'un-replace' it during the postback.
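For example, the server side of that round-trip is a one-line substitution (the [[lt]] token is an arbitrary placeholder that the client-side script would also have to use when it fills in htmlContent):

// Recover the raw HTML posted from the preview form; "[[lt]]" is whatever
// placeholder the client substituted for '<' before submitting.
string encoded = Request.Form["htmlContent"];
string html = encoded.Replace("[[lt]]", "<");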
I'm building a little action to take an encrypted PDF file path, decrypt it, and deliver the resulting PDF to the browser.
My code works 100% of the time in Chrome and Firefox, but it works only 50% of the time in IE9.
When I follow the link in IE9, it looks like it opens the Adobe Reader plugin in the browser window, but no file is displayed until I hit refresh.
Here is my code:
[CheckSubscriber]
public ActionResult file(string path)
{
    string mappedPath = Server.MapPath(
        EncryptDecrypt.Decrypt(path, EncString));

    return base.File(mappedPath, "application/pdf");
}
How would I get this to work consistently in IE9?
I'm just thinking out loud here, but maybe I am using the wrong MIME type?
You should be explicitly setting
Content-Disposition: inline; filename="foo.pdf"
The Content-Disposition header is crucial when returning a file from the server. Browsers will detect the file reliably when it is specified along with the correct MIME type.
You can use Fiddler to ensure that the response headers are in order.
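A minimal sketch of the action with the header set explicitly, keeping the CheckSubscriber attribute and EncryptDecrypt helper from the question:

[CheckSubscriber]
public ActionResult file(string path)
{
    string mappedPath = Server.MapPath(
        EncryptDecrypt.Decrypt(path, EncString));

    // Tell the browser to display the PDF in place instead of guessing
    Response.AppendHeader("Content-Disposition",
        "inline; filename=\"document.pdf\"");

    return File(mappedPath, "application/pdf");
}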
Edit
You cannot use a bare "ActionResult" for your action to do this.
You need "FilePathResult" or "FileStreamResult", both of which can be found in the System.Web.Mvc namespace (the File() helper returns one of these).
Alternatively you can create a custom action return type and use that for this action.
The article I have provided gives a step-by-step guide, along with code, on how to go about doing this.
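As a rough illustration of that approach (a hypothetical custom result type, not the article's code):

using System.IO;
using System.Web.Mvc;

// A custom result that always serves a PDF inline with an explicit
// Content-Disposition header.
public class InlinePdfResult : ActionResult
{
    private readonly string _path;

    public InlinePdfResult(string path)
    {
        _path = path;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        response.ContentType = "application/pdf";
        response.AppendHeader("Content-Disposition",
            "inline; filename=\"" + Path.GetFileName(_path) + "\"");
        response.TransmitFile(_path);
    }
}

The action can then simply return new InlinePdfResult(mappedPath).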
I would use Fiddler to see the difference between the request/response that the browsers send/receive and see if you can spot it from there.
Here's how I return an Excel file (pdf should be the same):
public FileResult DownloadErrors(string filename)
{
    var file = System.IO.File.ReadAllText(filename);
    return File(new System.Text.UTF8Encoding().GetBytes(file), "application/ms-excel", "Errors.csv");
}
Be sure to use FileResult instead of ActionResult.
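One caveat when adapting this for PDF: ReadAllText plus re-encoding is fine for a text format like CSV, but it would corrupt a binary file, so for a PDF read raw bytes instead (a minimal sketch):

public FileResult DownloadPdf(string filename)
{
    // PDF is binary - read raw bytes rather than decoding/re-encoding text
    byte[] bytes = System.IO.File.ReadAllBytes(filename);
    return File(bytes, "application/pdf", "document.pdf");
}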