I need your help with a problem. I have a page with a simple embed which displays a PDF file.
I got a request to add another PDF file to the same embed (or at least to do something which would look like it).
I searched for solutions and, not finding a simple one, I'm thinking about using iTextSharp to merge both files (by getting their streams from their URLs) into a new PDF file and displaying the result in the embed.
But I keep telling myself it's a bit much for such a simple modification... So I'm here asking if someone has a better idea? From what I found on Stack Overflow and Google it looks like I will have to go with the merge solution but hey, you never know ^^
A simpler option would be to merge the two PDF files using either a free online tool or Adobe Acrobat's Combine Files option, and then add that newly combined PDF to your site. Unless I am missing something, there is no real reason or benefit to doing this in code.
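That said, if the pair of files changes often enough that merging by hand becomes a chore, the stream-merge idea from the question really is only a few lines. Here is a rough sketch of it in Python using PyPDF2 (the question is about iTextSharp in .NET, but the shape is the same; the URLs are placeholders):

    import io
    from urllib.request import urlopen
    from PyPDF2 import PdfMerger

    urls = [
        "http://example.com/first.pdf",    # placeholder URLs
        "http://example.com/second.pdf",
    ]

    merger = PdfMerger()
    for url in urls:
        # PdfMerger needs a seekable stream, so buffer each download in memory
        merger.append(io.BytesIO(urlopen(url).read()))

    merger.write("merged.pdf")             # point the embed at this file
    merger.close()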
I am writing a web app and am not sure how to upload multiple images. I want to make sure that either all images are uploaded successfully or none of them are.
One way is to upload them simultaneously in several threads. However, if one of the uploads fails, it is difficult to cancel the other processes and remove the images that were already uploaded.
The solution I came up with is to combine the multiple files into one bigger file using some separator; the server then splits it back into the original files.
Is my solution appropriate? Or are there any better solutions?
You will need some logic to validate all the files after they have been uploaded. The simplest way is a standard multipart/form-data POST with all the files in one request, though of course it is not very elegant. Another option is to use a popular, well-tested plugin, for example https://github.com/blueimp/jQuery-File-Upload
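If you go the single-request route, the all-or-nothing behavior can live on the server: stage every file in a temporary directory and only move them into place once all of them validate. A minimal sketch in Python with Flask (the field name, validation rule, and target directory are placeholders):

    import os, shutil, tempfile
    from flask import Flask, request

    app = Flask(__name__)
    UPLOAD_DIR = "uploads"  # placeholder target directory

    def is_valid_image(path):
        # placeholder check; swap in real validation (size, magic bytes, ...)
        return os.path.getsize(path) > 0

    @app.route("/upload", methods=["POST"])
    def upload():
        files = request.files.getlist("images")  # all files arrive in one request
        staging = tempfile.mkdtemp()             # nothing is published from here yet
        try:
            paths = []
            for f in files:
                path = os.path.join(staging, os.path.basename(f.filename))
                f.save(path)
                paths.append(path)
            if not paths or not all(is_valid_image(p) for p in paths):
                return "rejected", 400           # none of the images were published
            for p in paths:                      # publish only when everything passed
                shutil.move(p, os.path.join(UPLOAD_DIR, os.path.basename(p)))
            return "ok", 200
        finally:
            shutil.rmtree(staging, ignore_errors=True)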
I want to use Neo4j to store a number of graphs I created in Python. I was using Gephi for visualization, and I thought its Export to Neo4j plugin would be a very simple way to get the data across. The problem is that the server seemingly does not recognize the neostore...db files that Gephi generated.
I'm guessing I configured things incorrectly, but is there a way to fix that?
Alternatively, I'm also open to importing the files directly. I have two files: one with node titles and attributes and another with an edge list of title to title.
I'm guessing that I would need to convert the titles to ids, right? What would be the fastest way to do that?
Thank you in advance!
If you have the data as tab-separated CSV files, feel free to import them directly. There are several options; check out this page: http://www.neo4j.org/develop/import
The CSV batch importer in particular can help you: http://maxdemarzi.com/2012/02/28/batch-importer-part-1/
Or if it is just a little bit of data, use the spreadsheet approach: http://blog.neo4j.org/2013/03/importing-data-into-neo4j-spreadsheet.html
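On the titles-to-ids question: the batch importer expects numeric node ids, so a small preprocessing script that assigns each title an id and rewrites the edge list is usually enough. A rough sketch in Python (the file names and column layout are assumptions about your data; it also assumes the files have no header row):

    import csv

    # Assign a numeric id to each title while copying the node file.
    ids = {}
    with open("nodes.csv") as src, open("nodes_with_ids.csv", "w", newline="") as dst:
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t")
        for row in reader:                       # row[0] assumed to be the title
            ids[row[0]] = len(ids)
            writer.writerow([ids[row[0]]] + row)

    # Rewrite the title-to-title edge list as id-to-id.
    with open("edges.csv") as src, open("edges_with_ids.csv", "w", newline="") as dst:
        reader = csv.reader(src, delimiter="\t")
        writer = csv.writer(dst, delimiter="\t")
        for src_title, dst_title in reader:
            writer.writerow([ids[src_title], ids[dst_title]])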
Please report back if you were successful.
I used Gephi to generate a Neo4j store directory in the past, and it worked like a charm...
I assume you deleted the default graph.db directory and renamed your Gephi-generated directory to graph.db? That worked for me...
There is a page I want to scrape; you can pass it variables in the URL, and it generates the corresponding content. All of the content is in a giant HTML table.
I am looking for a way to write a script that can go through 180 of these pages, extract specific information from certain columns of the table, do some math, and write the results to a .csv file, so I can do further analysis on the data myself.
What is the easiest way to scrape webpages, parse the HTML, and then store the data in a .csv file?
I have done similar things in Python and PHP, but parsing HTML there is neither the easiest nor the cleanest task. Are there other routes that are easier?
If you have some experience with Python, I would recommend something like BeautifulSoup; in PHP you can use phpQuery.
Once you know how to use the HTML parser, you can create a "pipes-and-filters" program to do the math and dump the results to a CSV file.
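For instance, a minimal sketch of that pipeline with BeautifulSoup (the URL pattern, column indices, and the math are placeholders you would adapt to your pages):

    import csv
    from urllib.request import urlopen
    from bs4 import BeautifulSoup

    with open("results.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["page", "value_a", "value_b", "ratio"])
        for page in range(1, 181):                            # the 180 pages
            url = "http://example.com/report?page=%d" % page  # placeholder URL pattern
            soup = BeautifulSoup(urlopen(url).read(), "html.parser")
            table = soup.find("table")                        # the giant table
            for row in table.find_all("tr")[1:]:              # skip the header row
                cells = [td.get_text(strip=True) for td in row.find_all("td")]
                a, b = float(cells[2]), float(cells[4])       # assumed columns of interest
                writer.writerow([page, a, b, a / b])          # the "do some math" step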
Have a look at this question for more info on a Python solution.
Can you use PurePDF to view files, or is the API only for writing them?
Based on the PurePDF Project Page, reading and extracting information from PDFs is supported:
read existing pdf documents (extract strings, streams, images and all the informations from them). See HelloWorldReader.as for an example
However, if you're looking to view / rasterize a PDF, that's a much more complicated task and doesn't look like it's supported as part of PurePDF.
I suggest converting the PDF into an SWF file. There are a number of projects out there (including free / open-source ones) that convert PDF pages into SWF files, some of which can still extract the text. :D
It looks like you can either navigate to the URL of the PDF (maybe in an HTML component?), or, for a richer solution, use the open-source FlexPaper: http://flexpaper.devaldi.com/
I found this tool, but I wonder whether it is still the right way nowadays with .NET 4.0, or whether there are any straightforward out-of-the-box alternatives.
I just need to add columns and update Excel content programmatically. There are many ways to do that, but I need to keep the original document as a template. The link above explains exactly what the requirements are and why the "ExcelPackage" library was created.
A quick look at the link you provided suggests that it will in fact keep the original template intact and just return a populated copy of it. This is a pretty common way to create and populate Excel documents using Open XML, since it helps to minimize the amount of code you have to write. If you did not specify the layout, styles, formats, etc. in a template, you would be forced to define them in code, and that could lead to some bloated code. Overall, a project like this, or using the Open XML SDK 2.0 to create the documents, is the way to go.
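ExcelPackage is a .NET library, but just to illustrate the load-a-template-then-populate pattern the answer describes, here is the same idea sketched in Python with openpyxl (the file names, sheet name, and cells are placeholders):

    from openpyxl import load_workbook

    # Open the pre-styled template instead of building a sheet from scratch,
    # so the existing layout, styles, and formats stay intact.
    wb = load_workbook("template.xlsx")      # placeholder template path
    ws = wb["Report"]                        # placeholder sheet name

    ws["B2"] = "Q3 totals"                   # fill individual cells...
    for i, value in enumerate([10, 20, 30], start=5):
        ws.cell(row=i, column=3, value=value)  # ...or whole columns of data

    wb.save("report_filled.xlsx")            # saving under a new name keeps the template untouched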