Cross-platform end-user help authoring tools - Qt

What are some good authoring tools for creating cross-platform help files for end-users? (Our application is using the Qt framework, if that makes any difference.)
Note: I'm not interested in internal API documentation--we're using doxygen for that.
Ideally, a solution would:
Allow us to manage all help content (text, table of contents, images, etc.) in a single location.
Output to native help formats. (CHM for Windows--or at least something we could feed directly into the HTML Help API; not sure what other platforms' "standard" help formats are.)
Decent WYSIWYG support: handle common text entry, images, cross-references, etc. easily, but we can edit the HTML when we need to.
Text-based file-format for help project (XML, etc.) so that it can be versioned in Subversion.
Any hooks that help keep it in sync with the actual code base would be great. (Perhaps a help topic could somehow be associated with a code file, and the tool could check Subversion to see if any changes have been made and flag the topic as "possibly out of date" ... am I dreaming?)
Help content can be localized.
Not opposed to commercial product, but a free option would be nice.
I'll go ahead and make this a wiki and start with a few examples. Vote 'em up or down if you have experience with them, and leave some comments. Add additional tools as well.

I just discovered Sphinx; I think I'm in love.
Better than WYSIWYG over HTML: reStructuredText
Outputs to QtHelp (among other things), so it will be easy to distribute (and integrate) with our application.
Not sure about localization yet, but we'll cross that bridge when we need to.
Was easy to set up and "just works"; looks professional.
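For anyone curious, the QtHelp output needs almost no configuration; here is a minimal conf.py sketch (the project name, paths, and locale directory are placeholders, not our real setup):

    # conf.py: minimal settings for building QtHelp output with
    #   sphinx-build -b qthelp source/ build/qthelp
    project = "MyApp"
    author = "Example Team"
    release = "1.0"
    master_doc = "index"        # top-level reStructuredText document

    # Base name for the generated Qt help project files.
    qthelp_basename = "MyApp"

    # Localization hook for later: translations live under locale/<lang>/LC_MESSAGES.
    language = "en"
    locale_dirs = ["locale/"]

The builder emits .qhp/.qhcp project files, which qhelpgenerator (or qcollectiongenerator on older Qt versions) turns into the .qch/.qhc files that Qt Assistant and QHelpEngine load.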

I have used robohelp for years.
It is fine, but the core technology is very old now. Also, the way it locks to specific Word versions is a total PITA (and has forced me to avoid MS Office upgrades several times).
We are moving to MadCap Flare: http://www.madcapsoftware.com/products/flare/robohelp.aspx

I think DocBook addresses all your requirements except possibly the synchronisation hooks, which I'll think a bit further on. It's essentially an XML-based markup language designed for creating documentation, and it is free and open source. It's just a format plus a set of XSL transforms that convert the DocBook source into more useful formats (HTML and thus CHM, JavaHelp, and PDF via XSL-FO or TeX).
This means that you still need to choose an XML authoring tool to actually edit it, so things like WYSIWYG will depend on the features of your XML authoring software. We use Syntext Serna as it has good support for WYSIWYG and inline editing of XML includes (no one else seems to support the latter). You may find other XML authoring tools better suit your needs; Serna is a reasonably pricey commercial offering.
DocBook provides a lot of flexibility via profiling, which allows you to include or exclude XML elements based on their attributes. An example use case would be producing slightly different help output for OS=Windows than for OS=Linux. Localization is also supported via profiling and other mechanisms.
A fairly good introduction to Docbook can be found here.
We use DocBook for our help format, and compile it to CHM files that contain help only for the features relevant to a specific product (i.e. the Enterprise edition has features that aren't in the Standard or Demo versions). The relevant steps are:
Run the profiling XSL templates on the XML source (using e.g. xsltproc).
Run the HTML-Help XSL templates on the output of 1.
Compile the output HTML files using Microsoft's HTML Help Compiler (HHC).
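In script form the pipeline looks roughly like the sketch below; it is not our actual build script, and it assumes xsltproc and hhc are on the PATH, with the DOCBOOK_XSL path and file names as placeholders:

    # Sketch of the three-step DocBook -> CHM pipeline described above.
    import subprocess

    DOCBOOK_XSL = r"C:\tools\docbook-xsl"   # path to the DocBook XSL distribution

    # 1. Profiling pass: keep only elements whose os attribute matches "windows".
    subprocess.check_call([
        "xsltproc",
        "--stringparam", "profile.os", "windows",
        "--output", "profiled.xml",
        DOCBOOK_XSL + "/profiling/profile.xsl",
        "manual.xml",
    ])

    # 2. HTML Help pass: chunk the profiled source into topic pages plus an .hhp project.
    subprocess.check_call([
        "xsltproc",
        DOCBOOK_XSL + "/htmlhelp/htmlhelp.xsl",
        "profiled.xml",
    ])

    # 3. Compile with Microsoft's HTML Help compiler. (hhc signals success with
    #    exit code 1, so check_call would misreport a successful build.)
    subprocess.call(["hhc", "htmlhelp.hhp"])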

Help & Manual

Robohelp

The only one I know is LaTeX, plus one of the latex2html converters, and then a few adaptations to make the resulting HTML ready for the CHM archiver.
Text, HTML, CHM, PDF, PS: no problem.
Converting to Word via RTF used to be a disaster; I don't know the current status.
The various LaTeX-to-HTML converters all have their own problems.
The PDFs look absolutely great.
WYSIWYM (via LyX) is possible.
This archive has a bunch of CHMs made that way (notably the prog, ref, and user parts; the rest (rtl, fcl, lcl) are generated by our own Doxygen equivalent, fpdoc):
http://www.stack.nl/~marcov/doc-chm.zip
Note that the above CHMs are made with our own (portable) CHM compiler. Yes, no more HTML Help Workshop.
A LyX document as PDF and HTML:
pdf: http://www.stack.nl/~marcov/buildfaq.pdf
html: http://www.stack.nl/~marcov/buildfaq/
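Roughly, the whole route can be driven like the sketch below; the tool choices, the -split level, and the hand-written .hhp project are assumptions for illustration rather than the exact setup described above (which uses our own CHM compiler instead of hhc):

    # PDF via pdflatex, HTML via latex2html, then the HTML handed to an HTML Help compiler.
    import glob
    import subprocess

    subprocess.check_call(["pdflatex", "manual.tex"])                    # -> manual.pdf
    subprocess.check_call(["latex2html", "-split", "4", "manual.tex"])   # -> manual/*.html

    # Minimal HTML Help project listing the generated pages.
    pages = "\n".join(sorted(glob.glob("manual/*.html")))
    with open("manual.hhp", "w") as hhp:
        hhp.write("[OPTIONS]\n"
                  "Compiled file=manual.chm\n"
                  "Default topic=manual/index.html\n"
                  "[FILES]\n" + pages + "\n")

    subprocess.call(["hhc", "manual.hhp"])   # hhc signals success with exit code 1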

Related

CSS pre-processors LESS and/or SASS

Is there a way to avoid working with the command line when installing and using LESS?
There are several GUI options for the compiling phase, but I could not find a way around the command line for the installation phase.
I have been working in the IT business for many decades (mostly in the mainframe and midrange area, as a project manager and programmer in application development) and have so far managed to avoid going down to the command-line level.
I have developed quite decent websites using HTML5 and CSS3, and in doing so I came to want exactly what LESS and/or SASS offer; the syntax and logic don't look difficult to handle. But I fail at the first step of simply installing it.
The LESS website offers command lines to key in. But I am not sure whether that is everything I have to key in, or only the significant line to be embedded in a sequence of other commands that are second nature to everyone who works at this level.
How do I, for example, define where the installation is stored, and how do I refer to it in the href of the link statement in my HTML file?
Thanks
Gerhard (from Vienna/Austria, living in Trier, Germany)
LESS is a CSS pre-processor. If you include less.js in your HTML page, you can use LESS directly in the page.
Alternatively, you can use a LESS compiler.
Koala is an open-source application that will help you compile LESS to CSS.
Your points are clear to me. I have already downloaded Koala, and I have no problem including less.js in my HTML. I have also read Bas Jobsen's book about the syntax, which does not seem to pose great problems for me.
But before working with it, I will have to download LESS, which I have done from the LESS website into the folder of my choice. My problem is the next necessary step: installing the downloaded program. There is no install.exe or anything like that. The book, as well as the info on the LESS website, tells me to key some cryptic commands into the command line.
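For reference, those commands usually boil down to two lines, assuming the Node.js installer (which provides npm) has already been run; the file names below are placeholders:

    :: Install the lessc command-line compiler globally via npm.
    npm install -g less

    :: Compile your .less source into the .css file that your <link href="..."> points at.
    lessc styles.less styles.css

There is nothing to install inside the website itself: the compiled .css is an ordinary stylesheet, so the href in your link statement simply points at wherever you save styles.css.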

Is SCORM Package Interchange Format simply a data interchange format or is it more complicated?

I am working on a learning project for mobile devices that requires (or would at least be desirable) the ability to export to a SCORM-compatible format. I see that SCORM has a "Package Interchange Format" (PIF) based around a .zip file. I am new to SCORM and am trying to understand exactly what this file must contain. Specifically, is the PIF file just a format for generating interchangeable data between systems, or is it more complicated than that?
For some context, imagine the use case of a set of questions/sections that a user has to run through on a native mobile app, and at the end, we want to offer the ability for the user to "export" their data in a SCORM-compliant fashion. Is this simply a matter of exporting information about a) the questions and b) the answers into some .xml format, or is there more to it? I notice a lot of the documentation around SCORM seems to focus on Javascript and HTML. Is SCORM HTML specific, or are native apps reconcilable with SCORM, at least from the export perspective?
Apologies if any of this is basic stuff. Just trying to wrap my head around the standard and how it does or does not apply to what I'm doing.
The PIF is really a very small detail of SCORM's packaging. It only says that you can distribute your content in zip format, but not what that zip should contain.
What a SCORM (1.2) file should contain is described in much, much detail in the SCORM CAM book. To summarize very quickly, you need:
All the files necessary for the content to run (images, html files, javascript files, css etc)
A file called imsmanifest.xml that describes a few things about your content, the files it contains, and possibly how they interact with the LMS they run on. It can vary from very simple to very complicated.
Optionally, metadata in XML format
So, SCORM does not care if and where you include your questions and answers; it doesn't know about them. They are your content's responsibility: the content itself should include them and present them to the user when run. What SCORM can do is make your content communicate with the LMS you're running it on, so that the results of those questions are persisted.
For now, I'd suggest that you have a look at some existing SCORM files to get an idea of what the imsmanifest.xml file should look like, and then study the SCORM CAM book and things will get rolling.
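To make that concrete, here is a deliberately bare-bones sketch of building a PIF; the identifiers, file names, and the stripped-down manifest are placeholders, and a real manifest needs the schema references and metadata the CAM book describes:

    # Sketch: a SCORM 1.2 PIF is just a zip with the content files plus an
    # imsmanifest.xml at its root. Assumes index.html and quiz.js exist.
    import zipfile

    MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
    <manifest identifier="com.example.course" version="1.0"
              xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
              xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
      <organizations default="ORG-1">
        <organization identifier="ORG-1">
          <title>Example Course</title>
          <item identifier="ITEM-1" identifierref="RES-1">
            <title>Quiz</title>
          </item>
        </organization>
      </organizations>
      <resources>
        <resource identifier="RES-1" type="webcontent"
                  adlcp:scormtype="sco" href="index.html">
          <file href="index.html"/>
          <file href="quiz.js"/>
        </resource>
      </resources>
    </manifest>
    """

    with zipfile.ZipFile("course.zip", "w", zipfile.ZIP_DEFLATED) as pif:
        pif.writestr("imsmanifest.xml", MANIFEST)   # must sit at the zip root
        pif.write("index.html")                     # the launchable SCO
        pif.write("quiz.js")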
The trouble with SCORM is that it has to be launched from within an LMS. If you're building an external app that has to communicate with an LMS, take a look at either LTI (http://www.imsglobal.org/toolsinteroperability2.cfm) or the Tin Can API (http://tincanapi.com/).
SCORM 2004 sample https://github.com/cybercussion/SCOBot/
You zip the contents of the directory. Some LMSs expect the imsmanifest.xml to be located in the root of the zip.
Some people are using native apps in an LMS format and loading the SCOs into an HTML view, but as stated above, SCORM expects JavaScript-to-JavaScript communication.

PDF to HTML or similar

I'm building an application to view PDFs through a browser on mobile devices, without the need for a plugin. I tried ImageMagick and Ghostscript to convert the pages to images, but the images are far too large and the text becomes unclear. I see websites offering a service that converts PDFs into HTML and does a decent job, but I can't find an example of how this is accomplished. Any help is much appreciated. Thanks!
EDIT: I seem to have read the question backwards. In this case it might be best to parse through the PDF and then format some HTML based on what you find. I believe the javapdf option is capable of this, but I haven't used any of these so I am not sure. If worse comes to worst and you can't find software to disassemble a PDF, you might be able to write your own disassembler in Java or PHP by reading the PDF specification. Best of luck!
http://www.adobe.com/devnet/pdf/pdf_reference.html - the PDF specification (Adobe's modified version; since they are the most popular, you may want to support their extensions)
-- OLD -- These websites probably write their own proprietary software to do the trick. If you are truly interested in this undertaking, I would suggest parsing the HTML to get the data and style information and using it to format some sort of PDF writer APIs. A quick Google search yields the following: -- END OLD --
http://www.cutepdf.com/Solutions/
http://ruby-pdf.rubyforge.org/pdf-writer/doc/index.html
http://asprise.com/product/javapdf/
If you are looking at converting PDF to HTML and planning to run the conversion on a server, then you can try pdftohtml, a program packaged as part of poppler-utils. I do not know how the program accomplishes the conversion internally.
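Calling it from a server-side script is straightforward; a quick sketch, assuming poppler-utils is installed and using placeholder file names:

    import subprocess

    # -c is "complex" mode, which tries to preserve the original layout;
    # -noframes emits a single output.html instead of a frameset.
    subprocess.check_call(["pdftohtml", "-c", "-noframes", "input.pdf", "output"])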
I was googling and came across the link below, which explains how scribd.com implements its conversion.
http://coding.scribd.com/2010/06/01/the-perils-of-stacking/

Using .ai (adobe illustrator) variables with Drupal variables

As we know, we can assign variables to an Adobe Illustrator file. Is it possible to access these variables using the Drupal 7 variables feature?
Short answer: no.
Longer answer: This is theoretically possible, but.
What you're referring to really amounts to rewriting some of the fundamental underpinnings of the internet. This would require, at minimum, extremely innovative development and entirely recreating several major software components. For example:
1) Users' browsers read and render hypertext. They would need to be rewritten to understand AI equivalents of links, pages, and other internet standards.
2) JavaScript, jQuery, and other client-side components would need to be rebuilt from scratch. You would also need to invent a new CSS and DOM that their replacements could understand.
3) Apache .... would be mostly okay with some minor tweaks. One or two new extensions at most.
4) PHP (which stands for "PHP Hypertext Preprocessor" and not "PHP advanced graphical tool") would need to be entirely redone, along with all of its extensions, integrations, and fundamental concepts.
5) Drupal and all its modules (which are built on the assumption that the output will be hypertext) would need to be substantially retooled. In particular, you would need a replacement for PHPTemplate that accesses AI objects.
So: There's a lot to do. I would say "let's get started," except that 6) AI is a proprietary product and we don't have licenses to develop and extend it.
I think it depends on what you want to do with the resulting file. Are you thinking of a variable-data Illustrator document that would be generated from the values in Drupal? If so, I think that is very possible. The Illustrator file format specification is somewhat available; you would just need to process the file without the help of Illustrator (which may present some challenges).
If you want to generate something that would be viewable in a browser, you're better off using SVG with some sort of placeholder that you could replace with regular expressions, or maybe some XPath queries to get the nodes you want to manipulate, and then adding your values there.
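A quick sketch of that placeholder idea; the element handling and the {{token}} convention are made up for illustration:

    import xml.etree.ElementTree as ET

    # Values coming from Drupal, keyed by the placeholder text used in the template.
    values = {"{{headline}}": "Summer Sale", "{{price}}": "19.99"}

    SVG_TEXT = "{http://www.w3.org/2000/svg}text"
    tree = ET.parse("template.svg")
    for node in tree.iter(SVG_TEXT):
        if node.text in values:
            node.text = values[node.text]   # swap the placeholder for the real value
    tree.write("rendered.svg", xml_declaration=True, encoding="UTF-8")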
As Dominic said, the end use isn't very clear here so the answers are really dependent on what you need to accomplish.

ASP.NET library to extract plain text from Open XML file formats

Is there a pre-existing library to extract plain text from Open XML file formats (e.g. docx, pptx, and xlsx files)?
I require this to populate a lucene.net index.
I've found this example which extracts text from docx and it seems to work okay. But before building my own solution based on this I was wondering if there's something already available for the other file formats?
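For reference, the docx case that the linked example handles boils down to a very simple structure; the sketch below is Python rather than .NET, but it shows the structure any such library works with: a .docx is a zip archive whose word/document.xml part holds the text in w:t runs, and the same walk maps directly onto System.IO.Packaging or the Open XML SDK.

    import zipfile
    import xml.etree.ElementTree as ET

    W_NS = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

    def docx_to_text(path):
        with zipfile.ZipFile(path) as docx:
            xml_bytes = docx.read("word/document.xml")
        root = ET.fromstring(xml_bytes)
        # Concatenate every w:t text run; add paragraph breaks if the index needs them.
        return "".join(node.text or "" for node in root.iter(W_NS + "t"))

    print(docx_to_text("report.docx"))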
Before spending cash, it may be worth looking at the IFilter interface - these were/are designed to do exactly what you want.
http://msdn.microsoft.com/en-us/library/ms691105
http://www.codeproject.com/KB/cs/IFilter.aspx
(There are some links at the bottom of the CodeProject article.)
MS provides IFilters for Office file types.
http://www.microsoft.com/downloads/details.aspx?familyid=60c92a37-719c-4077-b5c6-cac34f4227cc&displaylang=en
I know that we use this technology to allow us to index PDFs using Lucene but I did not write the actual code and cannot be of much use I am afraid.
If your Google-fu is strong I am sure you can dig up more examples of using IFilters to do exactly what you want.
Have a look at aspose.com; they have a good library for handling both ppt and pptx.
You can try Toxy, an open source text/data extraction framework for .NET. For now, it supports xls, xlsx, doc, and docx. It will support pptx in version 1.5 very soon.
For details, you can check here.
