We run into problems using the product: it seems some functions in iText 5.4.3 don't work correctly. Can someone give us a hint how to solve this?
We modify src.pdf to dest.pdf as follows:
Function CreateFlattedPdf(ByRef originalPdf As Byte()) As Byte()
    Dim retValue As Byte() = Nothing
    Dim originalPdfReader As PdfReader = New PdfReader(originalPdf)
    Dim pdfKopie As MemoryStream = New MemoryStream()
    Dim pdfKopieStamper As PdfStamper = New PdfStamper(originalPdfReader, pdfKopie)
    pdfKopieStamper.SetEncryption(False, userPassword, ownerPassword, _
        PdfWriter.ALLOW_ASSEMBLY _
        Or PdfWriter.ALLOW_COPY _
        Or PdfWriter.ALLOW_DEGRADED_PRINTING _
        Or PdfWriter.ALLOW_FILL_IN _
        Or PdfWriter.ALLOW_MODIFY_ANNOTATIONS _
        Or PdfWriter.ALLOW_MODIFY_CONTENTS _
        Or PdfWriter.ALLOW_PRINTING _
        Or PdfWriter.ALLOW_SCREENREADERS _
    )
    ' Remove the signature information from the original PDF document
    pdfKopieStamper.FormFlattening = True
    pdfKopieStamper.Close()
    ' Write the content of the flattened copy into the response
    retValue = pdfKopie.ToArray()
    ' Finish processing the document
    pdfKopie.Close()
    originalPdfReader.Close()
    Return retValue
End Function
Besides, we set all available iText permission bits. As a result we get a PDF where page extraction and document assembly are not allowed?!
My questions are:
Is this a misbehavior of iText, or can one change this setting with iText in general? If so, how (code example)?
Can one set these rights without any password, too? So far we have only seen functions that set rights in conjunction with a user and an owner password.
Thanks for your help in advance!
Ingo
The observations
The permission tab as seen by the OP indeed shows some missing permissions:
Inspecting the permissions of the OP's result file using Adobe Acrobat, though, shows a different result:
Merely flattening the OP's source file (not encrypting it at all!), one gets this file, for which Adobe Acrobat even shows these permissions:
The explanation
This is a behavior of Adobe Reader, the PDF viewer most likely used by the OP: the permissions tab seen by the OP represents not only what has or has not been forbidden during encryption but also restrictions of the PDF viewer itself.
The OP seems to be under the misconception that by using encryption and setting permission bits one can add capabilities compared to un-encrypted files. Actually it is the other way around: encryption allows you to remove permissions compared to what is allowed for an un-encrypted document. By not using certain ALLOW_* permission bits you withdraw permissions. You get the maximum number of permissions by simply not encrypting the document.
In addition to the permissions of the un-encrypted document, a specific PDF viewing program might require additional usage rights, which are viewer-specific. Such PDF viewers (foremost Adobe Reader) generally are fairly inexpensive or free, but they do not offer their full functionality unless the document in question carries the appropriate usage rights.
Usage rights can be added by means of usage rights signatures. To apply such usage rights signatures you usually need software or services provided by the manufacturer of the PDF viewer in question.
To add usage rights for Adobe Reader, e.g., you can use Adobe Acrobat or certain Adobe LiveCycle services.
The answers
As a result we get a PDF where page extraction and document assembly are not allowed
No. As can be seen above, your dest.pdf only disallows page extraction, and as soon as you stop encrypting, even that is allowed.
1. Is this a misbehavior of iText, or can one change this setting with iText in general? If so, how (code example)?
It is not a misbehavior of iText; it is a behavior of Adobe Reader. Adobe Reader limits its features in general and only lifts the limitations for documents with usage rights. Such usage rights can only be applied by Adobe software.
2. Can one set these rights without any password, too? So far we have only seen functions that set rights in conjunction with a user and an owner password.
Using encryption actually is counter-productive as it can only be used to remove permissions, not to add them.
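For illustration, here is a minimal sketch (C#/iTextSharp; the method name is mine, not from the original post) of the same flattening with the SetEncryption call simply dropped, which leaves the document un-encrypted and thus with the maximum set of standard permissions:
using System.IO;
using iTextSharp.text.pdf;

static byte[] CreateFlattenedPdfWithoutEncryption(byte[] originalPdf)
{
    PdfReader reader = new PdfReader(originalPdf);
    MemoryStream copy = new MemoryStream();
    PdfStamper stamper = new PdfStamper(reader, copy);
    // No SetEncryption call: the document stays un-encrypted,
    // so no standard permission is withdrawn.
    stamper.FormFlattening = true;   // flatten forms / signature fields
    stamper.Close();
    reader.Close();
    return copy.ToArray();           // still valid after the stream is closed
}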
Resources
Additional information on the issue taken from the parallel post on the itext-questions mailing list:
a sample source file illustrating the issue src.pdf
the corresponding result file generated by the OP's code dest.pdf
a screen shot showing the OP's PDF viewer's permission tab for dest.pdf:
I've created a generic handler in VS2012 using the basic template as a starting point and modified it to grab a PDF from our SQL Server. The primary code block is this:
buffer = DirectCast(rsp.ScalarValue, Byte())
context.Response.ContentType = "application/pdf"
context.Response.OutputStream.Write(buffer, 0, buffer.Length)
context.Response.Flush()
And this works fine to display the BLOB as a PDF using whichever PDF plugin is installed in any given browser.
My question: how can I modify the handler to write Adobe PDF-specific parameters to the output? Specifically, I'm trying to set width='fit' so that the output PDF stream will auto-fit the document to the width of the popup window.
NB: Writing the BLOB to a PDF file and serving that file is not an option.
Thanks in advance for any advice or links
I don't think there's anything you can do in your handler. According to that document, PDF viewers can examine the URL that was used to open the PDF, but there are no HTTP headers that you can set. So you'll need to modify the thing that links to your handler to include those parameters. Alternatively, you could build a pre-handler that HTTP-redirects to your existing handler with those parameters in place.
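If you take the pre-handler route, a minimal sketch might look like this (the handler and query-string names are assumptions; #view=FitH is the documented PDF open parameter for fitting the page width):
using System.Web;

// Hypothetical pre-handler: redirects to the real PDF handler with an Adobe
// "PDF Open Parameters" fragment appended. The fragment is never sent to the
// server; it is interpreted by the client-side viewer.
public class PdfRedirectHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"];
        // #view=FitH asks Adobe viewers to fit the page width to the window.
        context.Response.Redirect("PdfHandler.ashx?id=" + id + "#view=FitH");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}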
Also, that document was written in 2007 and was intended for Adobe Acrobat and Adobe Reader. Most modern browsers ship with their own internal PDF viewer these days so unless you are only targeting Adobe your efforts might be wasted.
I need to create and read a user preferences XML file with Adobe Air. It will contain around 30 nodes.
<id>18981</id>
<firstrun>false</firstrun>
<background>green</background>
<username>stacker</username>
...
What's a good method to do this?
Write up an "XML parser" that reads the values and knows which data types to convert to, based on the "save preferences model". So basically you write a method/class for writing the data from the preferences model to XML, then a method/class for reading from the XML back into the preferences model; you can use describeType for both.
describeType returns an XML description of the model class's properties, the types of those properties, and their accessibility (read/write, read-only, write-only). You would store every read/write property in the XML output. When reading them back in, you do the same thing, except you can use the type property from the describeType output to determine whether you need a string-to-Boolean conversion (if (boolValue == "true")) or a string-to-number conversion (parseInt or parseFloat).
You could ultimately store the XML in a local SQL database if you want to keep history, or else just store the current preferences in a flat file (using FileReference, or in AIR you can use FileStream to write directly to a location).
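For illustration only, here is the same reflect-then-convert idea sketched in C# (the model class and element names are made up; in ActionScript, describeType plays the role that reflection plays below):
using System;
using System.Reflection;
using System.Xml.Linq;

// Illustrative preferences model; the fields mirror the XML sample above.
public class Preferences
{
    public int Id { get; set; }
    public bool FirstRun { get; set; }
    public string Background { get; set; }
    public string Username { get; set; }
}

public static class PreferencesXml
{
    // Write every readable and writable property out as an element.
    public static XElement Save(Preferences p)
    {
        var root = new XElement("preferences");
        foreach (PropertyInfo prop in typeof(Preferences).GetProperties())
            if (prop.CanRead && prop.CanWrite)
                root.Add(new XElement(prop.Name.ToLower(), prop.GetValue(p, null)));
        return root;
    }

    // Read elements back, converting strings to each declared property type.
    public static Preferences Load(XElement root)
    {
        var p = new Preferences();
        foreach (PropertyInfo prop in typeof(Preferences).GetProperties())
        {
            XElement e = root.Element(prop.Name.ToLower());
            if (e != null && prop.CanWrite)
                prop.SetValue(p, Convert.ChangeType(e.Value, prop.PropertyType), null);
        }
        return p;
    }
}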
Edit:
I agree with Joshua's comment below: local shared objects were the first thing I thought of when seeing this. You can eliminate the need to write the XML parser/reader since they handle serializing/de-serializing the objects for you (but manually looking at the LSO is probably ugly). Anyhow, I had done something similar for another project of mine and tried stripping out the relevant code; note that in my example here I didn't use describeType, but the general concept is the same:
http://shaunhusain.com/OnePageSaverLoader/index.php
I use code like this:
string.Format("<img src='{0}'><br>", u.Avatar);
u.Avatar is something like '/img/path/pic.jpg'.
On this site I can upload a new image in place of the old pic.jpg, so the picture is new but the name is old, and the browser shows the OLD picture (from its cache). If I append a random number like /img/path/pic.jpg?123 it works fine, but I need it only after an upload, not always. How can I solve this?
string imgUrl =
    string.Format("<img src='{0}?{1}'><br>",
        u.Avatar,
        FunctionThatLookupFileSystemForItsLastModified(u.Avatar).Ticks.ToString());
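FunctionThatLookupFileSystemForItsLastModified is the answerer's placeholder; one possible implementation, assuming the avatar path is a rooted virtual path, might be:
using System;
using System.IO;
using System.Web.Hosting;

// Maps the virtual path (e.g. "/img/path/pic.jpg") to a physical path and
// returns the file's last-write time, so the query string changes only when
// the file is actually replaced.
static DateTime FunctionThatLookupFileSystemForItsLastModified(string virtualPath)
{
    string physicalPath = HostingEnvironment.MapPath(virtualPath);
    return File.GetLastWriteTimeUtc(physicalPath);
}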
Instead of linking to the images directly, consider setting up a generic HTTP handler to serve the images.
MSDN: HTTP Handlers and HTTP Modules Overview
Stack Overflow: How to use output caching on .ashx handler
Append DateTime.Now.Ticks to the image url:
string imgUrl =
string.Format("<img src='{0}?{1}'><br>", u.Avatar,DateTime.Now.Ticks);
EDIT: I don't think this is a best practice, or even a practice I would use. It is just a suggestion, given the limited information provided, in case the Random implementation isn't truly random.
I read your post again... sorry for the general answer.
To work around it, do the following:
On Application_Start, create a Dictionary for uploaded images and save it on the Application object, initially empty. Once you upload an image, add it to this Dictionary. Wrap every place avatars appear on your website with a function that checks the image against the Dictionary: if it is found, return imagename.jpg?randomnumber and then delete the entry from the Dictionary; otherwise return just imagename.jpg.
This is going to be heavy, because you will need to check each image against the Dictionary, but it will do exactly what you need.
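A rough sketch of that idea (class, key, and method names are mine, not from the original answer; real code would also need locking around the shared Application state):
using System;
using System.Collections.Generic;
using System.Web;

public static class AvatarCacheBuster
{
    // Dictionary of recently uploaded avatars, kept on the Application object.
    private static Dictionary<string, long> Uploaded
    {
        get
        {
            var app = HttpContext.Current.Application;
            if (app["UploadedAvatars"] == null)
                app["UploadedAvatars"] = new Dictionary<string, long>();
            return (Dictionary<string, long>)app["UploadedAvatars"];
        }
    }

    // Call right after a successful upload.
    public static void MarkUploaded(string avatarPath)
    {
        Uploaded[avatarPath] = DateTime.Now.Ticks;
    }

    // Wrap every avatar URL with this.
    public static string Url(string avatarPath)
    {
        long ticks;
        if (Uploaded.TryGetValue(avatarPath, out ticks))
        {
            Uploaded.Remove(avatarPath);   // bust the cache only once
            return avatarPath + "?" + ticks;
        }
        return avatarPath;                 // unchanged, cacheable URL
    }
}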
You can set a cache dependency using the System.Web.Caching.CacheDependency class.
This can make the cache entry depend on the uploaded file, and will release the cache for that file automatically when the file changes.
There are lots of articles on MSDN and other places, so I will not go into that level of detail.
You can do inserts, deletes and other management of cache using the tools available.
(And this does not require you to change the file names or tack anything on - it knows from the file system that the file changed.)
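A minimal sketch of that approach (the helper and cache-key choices are assumptions, not from the original answer):
using System.IO;
using System.Web;
using System.Web.Caching;

// Cache a file's bytes with a CacheDependency on the file itself, so the
// entry is evicted automatically when the file is overwritten on disk.
static byte[] GetAvatarBytes(string virtualPath)
{
    byte[] bytes = (byte[])HttpRuntime.Cache[virtualPath];
    if (bytes == null)
    {
        string physicalPath = System.Web.Hosting.HostingEnvironment.MapPath(virtualPath);
        bytes = File.ReadAllBytes(physicalPath);
        HttpRuntime.Cache.Insert(virtualPath, bytes, new CacheDependency(physicalPath));
    }
    return bytes;
}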
How should I go about providing download functionality on an ASP.NET page for a series of rows from a database table, represented as a LINQ to SQL class with only primitive-typed members, ideally in a format that can be easily read by Excel?
E.g.
public class Customer
{
public int CustomerID;
public string FirstName;
public string LastName;
}
What I have tried so far.
Initially I created a DataTable, added all the Customer data to this table and bound it to a DataGrid, then had a download button that called DataGrid1.RenderControl to an HtmlTextWriter that was then written to the response (with content type "application/vnd.ms-excel") and that worked fine for a small number of customers.
However, now the number of rows in this table is >10,000 and is expected to reach upwards of 100,000, so it is becoming prohibitive to display all this data on the page before the user can click the download button.
So the question is, how can I provide the ability to download all this data without having to display it all on a DataGrid first?
After the user requests the download, you could write the data to a file (.CSV, Excel, XML, etc.) on the server, then send a redirect to the file URL.
I have used the following method from Matt Berseth's blog for large record sets.
Export GridView to Excel
If you have issues with the request timing out, try increasing the HTTP request timeout in web.config.
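For example, something along these lines (the value is illustrative; executionTimeout is in seconds and only takes effect when debug is off):
<system.web>
  <httpRuntime executionTimeout="600" />
</system.web>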
Besides the reasonable suggestion in one of the answers here to save the data to a file on the server first, I would like to also point out that there is no reason to use a DataGrid (it's one of your questions as well). A DataGrid is overkill for almost anything. You can just iterate over the records and save them directly, using HtmlTextWriter, TextWriter (or just Response.Write or similar), to a server file or to the client output stream. It seems to me like an obvious answer, so I must be missing something.
Given the number of records, you may run into a number of problems. If you write directly to the client output stream and buffer all data on the server first, it may be a strain on the server. But maybe not; it depends on the amount of memory on the server, the actual data size, and how often people will be downloading the data. This method has the advantage of not blocking a database connection for too long. Alternatively, you can write directly to the client output stream as you iterate. This may block the database connection for too long, as it depends on the download speed of the client. But again: if your application has a small or medium audience, then anything is fine.
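As a sketch of the second variant, streaming CSV straight to the response (this assumes the Customer class from the question and a hypothetical GetCustomers() enumerator):
// Inside an ASP.NET page or handler; rows are written out as they are read,
// so nothing has to be displayed in a grid first.
Response.Clear();
Response.ContentType = "text/csv";
Response.AppendHeader("Content-Disposition", "attachment; filename=customers.csv");
Response.Write("CustomerID,FirstName,LastName\r\n");
foreach (Customer c in GetCustomers())   // hypothetical data source
{
    Response.Write(c.CustomerID + "," + c.FirstName + "," + c.LastName + "\r\n");
}
Response.End();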
You should definitely check out the FileHelpers library. It's an excellent, free utility set of classes to handle just this situation: import and export of data from text files, either delimited (like CSV) or fixed width.
It offers a gazillion options and ways of doing things, it's FREE, and it works really well in various projects that I'm using it in. You can export a DataSet, an array, a list of objects - whatever it is you have.
It even has import/export for Excel files, too - so you really get a bunch of choices.
Just start using FileHelpers - it'll save you so much boring typing and stuff, you won't believe it :-)
Marc
Just a word of warning, Excel has a limitation on the number of rows of data - ~65k. CSV will be fine, but if your customers are importing the file into Excel they will encounter that limitation.
Why not allow them to page through the data, perhaps sorting it before paging, and then give them a button to just get everything as a CSV file?
This seems like something that DLinq would do well: both the paging and the writing out, as it can fetch just one row at a time, so you don't read in all 100k rows before processing them.
So, for CSV, you just need a different LINQ query to get all of the rows, then save them, separating each cell with a separator, generally a comma or tab. That could perhaps be something picked by the user.
OK, I think you are talking about too many rows to use a DataReader and then loop through to create the CSV file. The only workable way will be to run:
SQLCMD -S MyInstance -E -d MyDB -i MySelect.sql -o MyOutput.csv -s
For how to run this from ASP.Net code see here. Then once that is done, your ASP.Net page will continue with:
string fileName = "MyOutput.csv";
string filePath = Server.MapPath("~/"+fileName);
Response.Clear();
Response.AppendHeader("content-disposition",
"attachment; filename=" + fileName);
Response.ContentType = "application/octet-stream";
Response.WriteFile(filePath);
Response.Flush();
Response.End();
This will give the user the popup to save the file. If you think more than one of these will happen at a time you will have to adjust this.
So after a bit of research, the solution I ended up trying first was to use a slightly modified version of the code sample from http://www.asp.net/learn/videos/video-449.aspx and format each row value in my DataTable for CSV using the following code to try to avoid potentially problematic text:
private static string FormatForCsv(object value)
{
var stringValue = value == null ? string.Empty : value.ToString();
if (stringValue.Contains("\"")) { stringValue = stringValue.Replace("\"", "\"\""); }
return "\"" + stringValue + "\"";
}
For anyone who is curious about the above: I'm basically surrounding each value with quotes and also escaping any existing quotes by doubling them. For example:
My Dog => "My Dog"
My "Happy" Dog => "My ""Happy"" Dog"
This appears to be doing the trick for now for small numbers of records. I will try it soon with the >10,000 records and see how it goes.
Edit: This solution has worked well in production for thousands of records.
I was wondering what the best practice is for serving a generated big file in classic ASP.
We have an application with an "export to Excel" function that produces 10 MB files. The Excel files are created by just calling an .asp page that has Response.ContentType set to Excel and contains an HTML table with the data.
The problem with this is that it takes 4 minutes before the user sees the "Save as..." dialog.
My current solution is to call, via AJAX, an .asp page that creates the Excel file on the server and returns the URL of the generated document. Then I can use JavaScript to display the link on the original page.
Is this easy to do with classic ASP (creating files on the server with some kind of stream) while keeping security in mind? (The URL should not let people guess the location of other files.)
How would I go about deleting the generated files over time? They have to be deleted periodically, as the data changes in real time.
Thanks.
Edit: I realize now that creating the file on the server will probably also take 4 minutes...
I think you are selecting a complex route when the solution is simple enough (though I may be missing some requirements).
If you want to generate an Excel file, just call an ASP page that does the following:
Response.Clear
Response.AddHeader "Content-Disposition", "attachment; filename=myexcel.xls"
Response.ContentType = "application/excel"
' Write the content of the file
Response.Write "...."
Response.End
This will start a download in the browser without needing an extra call, JavaScript, or anything else.
See this question for more info on the format you will choose to generate the excel.
Edit
Since Thomas updated the question and the real problem is that the file takes 4 minutes to generate, the solution could be:
Offer to send the user the file by email (if this is a workable solution on your server or hosting).
Generate the file asynchronously, and let the user know when the file generation is done (with an AJAX call, like SO does when another user has added an answer).
To generate the file on the server
' You should change this to a random name or something that makes sense
FileName = "C:\temp\myexcel.xls"
' Classic ASP (VBScript) has no Open/Print # statements, so use the FileSystemObject
Set fso = Server.CreateObject("Scripting.FileSystemObject")
Set outFile = fso.OpenTextFile(FileName, 8, True) ' 8 = ForAppending; create if missing
' Generate the content
TheRow = "...."
outFile.WriteLine TheRow
outFile.Close
To delete the temp files generated
I use Empty Temp Folders, a freeware app that I run daily on the server, to take care of generated temp files. (Again, it depends on your server or hosting.)
About security
Generate the file names using random numbers or GUIDs for light protection. If the data is sensitive, you will need to serve the file from an ASP page, but I think you will run into the same problem again... (waiting 4 minutes for the download).
Read the file using FSO.
Set headers for the Excel file type, a name matching the file read, and for download (attachment).
Flush the response after the headers are set. The client should display the "Save as" dialogue.
Output the FSO contents to the response. The client will download the file and see a progress bar.
How do you plan to generate the Excel file? I hope you don't plan to call Excel to do that, as it is unsupported and generally won't work well.
You should check whether there are COM components to generate Excel that you can call from classic ASP. Alternatively, add one ASP.NET page for the purpose. I know for a fact that there are components that can be called from ASP.NET pages to do this. If worst comes to worst, there's an Excel exporter component from Infragistics that works with their UltraWebGrid control to export. The grid need not be visible in order to accomplish this, but styles in the grid translate to styles in the spreadsheet. They also allow you to manipulate the spreadsheet programmatically.