The DocUtils include directive allows arbitrary text to be included in an RST document. The problem is that the implementation restricts the included files to the directory of the including document. This makes it difficult to use includes for external files that are shared by many documents. The standard DocUtils answer is something called Standard Definition Files. Unfortunately, these require anyone using the facility to get involved with the DocUtils source code, which affects every end user who needs it. Needing source code is not an acceptable solution for people who are not programmers and are only trying to use DocUtils, possibly in an environment such as Sphinx. Is there any other workaround for this situation?
Use the literalinclude directive, using .. to navigate up and around the filesystem. For example:
.. literalinclude:: ../../external-to-project-directory/src/somefile.py
This will include somefile.py as literal code in your documentation.
If you want the file parsed as reST instead, use the include directive.
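For example, assuming a shared directory of reST snippets that lives outside the project (the path is illustrative):

.. include:: ../../shared-docs/common-definitions.rst

The contents of common-definitions.rst are then parsed as reST, as if they had been written in place.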
What is the difference between a resource file and a library file in Robot Framework?
I searched Google but wasn't able to find an answer.
A resource file's content is written in Robot Framework syntax. When it is imported in a suite, you can use all the keywords and variables defined in its sections. All of its own imports (other resource files and libraries it lists in its Settings section) also become available for use.
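For illustration, a minimal resource file could look like this (all names are made up):

*** Settings ***
Library           Collections

*** Variables ***
${BASE_URL}       https://example.test

*** Keywords ***
Open Application
    Log    Opening ${BASE_URL}

A suite that imports it with "Resource    common.robot" in its Settings section can then use Open Application and ${BASE_URL} directly.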
Libraries, on the other hand, are (usually) written in Python. They can be packages installed through pip, or standalone scripts or modules. In the simplest case, all public (more precisely, not hidden) functions of a module are available as keywords in the suite. For more advanced usage (scope, state upkeep), they have to follow a specific structure (usually accomplished through classes, using the identifiers/decorators Robot Framework expects).
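As a minimal sketch (module and function names are hypothetical), a plain Python module is already a usable library:

# MyLibrary.py - every public function becomes a keyword
def count_items(items):
    """Available in the suite as: Count Items"""
    return len(items)

def _internal_helper():
    # hidden: the leading underscore keeps this from becoming a keyword
    pass

Importing it with "Library    MyLibrary.py" makes Count Items available as a keyword.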
There is a third type of import, which you haven't asked about but which I'm adding for completeness: variable files. Their format is once again Python code, which makes them quite versatile and powerful compared to variables defined in Robot Framework syntax (you can build the variables' content through complex programming constructs).
One caveat to keep in mind with them: the framework treats every attribute of the module as a variable and makes it accessible in your suite; this includes even other modules the file imports. You therefore have to hide such names with a leading underscore (or abuse this side effect for silent imports in some exotic cases).
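A sketch of such a variable file (names are illustrative), including the underscore trick for hiding imports:

# myvars.py - a variable file
import os as _os  # the underscore alias keeps the module out of the variable pool

GREETING = "Hello"                    # usable as ${GREETING}
SERVERS = ["alpha", "beta"]           # usable as @{SERVERS}
HOME_DIR = _os.path.expanduser("~")   # computed when the file is loaded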
I've included links to the relevant sections of the user guide, for further information.
I have a bunch of XSD files which I did not write myself. The files sometimes import each other:
<xs:import namespace="http://www.mysite.com/xmlns/xXX-YYYY/V" schemaLocation="http://www.mysite.com/xmlns/xXX-YYYY/V/schema_A.xsd"/>
and I would like to get an overview of the dependencies without having to read through all of them.
The URI specified by schemaLocation does not exist; instead, a catalog.xml file is used to resolve the schema locations.
http://de.wikipedia.org/wiki/XML_Catalogs
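For reference, a minimal catalog that redirects such a schemaLocation to a local file looks roughly like this (the local paths are illustrative):

<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <!-- map the remote schemaLocation to a local copy -->
  <system systemId="http://www.mysite.com/xmlns/xXX-YYYY/V/schema_A.xsd"
          uri="schemas/schema_A.xsd"/>
</catalog>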
Can anybody recommend a tool that can visualize the dependencies of my schemas by also processing the information given in the catalog.xml file?
To follow up on my comment...
I am not aware of any tool that takes OASIS catalog files into account. Have a look at this response and see if it supports what you need (and your platform).
Strictly speaking, there are a number of issues with dependency diagrams, which is why such a question should be qualified with why you want one.
Some think that such a diagram truly shows the dependencies between XSD files; that is not true: it may show what the author thinks the dependencies are, but not what the processor actually agrees to. schemaLocation is just a hint that processors may or may not use: "may not" if they're instructed otherwise (well-known XSDs may be cached internally, through catalog entries or any other proprietary "catalogs"), or because the processor decides there is no need to load an external reference when there's no use for it anyway (which can happen in some corner cases).
A diagram built from explicit schema locations is definitely easier to produce. It only shows what the author intended, which doesn't mean it is the "real" one (content may be pulled in indirectly, making the whole XSD set valid even though individual XSDs, opened independently of the set, would be invalid).
Building a diagram where dangling or non-existent schemaLocation values are overridden through a catalog is much harder, due to the multitude of ways to structure the content and to the resolution mechanism. It would have the same shortcoming as the one above (except that now the relevant author is that of the catalog file, rather than whoever authored the XSDs).
The "true" dependency can be built by traversing a schema set already loaded and compiled. Even then, you would still need to define criteria regarding dependencies due to substitutable components (elements in substitution groups or derived types, through the use of the xsi:type attribute). That is even harder.
Take a look at this tool: DocFlex/XML XSDDoc. It is an XML schema documentation generator. It doesn't visualize XSD dependencies, but it does work with XML catalogs. The overview of each XSD file lists all other XSD files referenced from it (i.e. imported, included or redefined), and there is also the opposite list: the schemas that reference the given one. So you can use it to figure out which XSD files depend on which. At the least, that will be easier than reading raw XSD files.
As an example, here is documentation generated with that tool: XML Schemas for DITA 1.1. It was generated essentially from two files:
http://docs.oasis-open.org/dita/v1.1/OS/schema/ditaarch.xsd
http://docs.oasis-open.org/dita/v1.1/OS/schema/catalog.xml
ditaarch.xsd is the schema driver that pulls in all the other schemas (25 in total); catalog.xml is the XML catalog, via which all file references are resolved.
The schemaLocation attributes in those schemas themselves specify nothing but opaque URIs.
I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, I would like to make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to Closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. This doesn't seem to be supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply pass my list of files to the compiler directly, because my code also requires things like goog.asserts, so I do need the dependency resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
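Such a file would be nothing but requires; a sketch, using the namespaces from above:

// FooLibrary.ExposeAPI.js - aggregator that pulls in the whole public surface
goog.provide('FooLibrary.ExposeAPI');

goog.require('FooLibrary.Bar');
goog.require('FooLibrary.Qux.Rumps');
goog.require('FooLibrary.Qux.Scrooge');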
This is my main issue.
However, the Closure Compiler, with ADVANCED_OPTIMIZATIONS on, will later optimize all these names away. I can fix that by adding @export all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
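For reference, the annotation looks like this (frobnicate is a made-up method; @export takes effect only with the compiler's --generate_exports flag):

/** @export */
FooLibrary.Bar.frobnicate = function() {
  // @export keeps this name reachable after ADVANCED_OPTIMIZATIONS
};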
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, when working in source mode, I need to call goog.require() for every namespace I use. This is merely an inconvenience, though I mention it because it is related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull in all the child namespaces as well, thus recreating with a single command the environment I have when using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that the Closure Compiler is built more for the end consumer than for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root pointing at your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated calcdeps.py script; I still use it for some projects.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However, I can still include the library source (without the exports) in a project and get full dead-code elimination. The work/payoff balance here must be weighed against simply using SIMPLE_OPTIMIZATIONS for the standalone library.
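As a sketch, such an exports file is just explicit export calls next to the requires (the names are illustrative):

// exports.js - compiled in only for the standalone distribution build
goog.require('FooLibrary.Bar');

goog.exportSymbol('FooLibrary.Bar', FooLibrary.Bar);
goog.exportProperty(FooLibrary.Bar.prototype, 'frobnicate',
    FooLibrary.Bar.prototype.frobnicate);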
My GeolocationMarker library has an example of this strategy.
I've noticed that there seems to be an unwritten convention that, when posting a file to a server, the file in the request is called "qqfile". Googling away, I can't seem to find any explanation of why "qqfile" was chosen.
Is there some reason rooted in history as to why this name is used?
Because one of the popular AJAX file-upload components uses the namespace qq and the parameter qqfile. Prefixing file with the namespace was probably meant to avoid clashes with other parameters in a form. I don't know why the author preferred qq as the namespace for his component, though; it resembles neither file uploading nor his website, valums.com. He may simply have wanted something short, like $, that doesn't conflict with existing popular libraries.
https://github.com/valums/file-uploader
I'm analyzing some legacy code: about 80,000 lines of old PL/SQL. At first glance there is quite a bit of duplication in the source, which needs to be removed. Instead of diffing manually and looking at each file, there must be some tool or command-line setup out there to detect duplicated lines of source code.
My goal is to make an educated guess about the minimal size of a rewrite and about how much actual knowledge is captured in this program. I wrote a basic static code analyzer to count the control statements (IF, ELSE, FOR, etc.) and functions in each file.
But duplicated code still needs to be removed from my statistics.
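For scale: even a naive normalize-and-count pass over the lines gives a first estimate of the duplication ratio. A minimal sketch (the length threshold is arbitrary):

import sys
from collections import Counter

def duplicate_line_ratio(paths, min_len=20):
    """Share of non-trivial lines that occur more than once across the files."""
    counts = Counter()
    for path in paths:
        with open(path, errors="replace") as f:
            for line in f:
                normalized = " ".join(line.split())  # collapse whitespace
                if len(normalized) >= min_len:       # skip short/trivial lines
                    counts[normalized] += 1
    total = sum(counts.values())
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / total if total else 0.0

if __name__ == "__main__":
    print(f"duplicate line ratio: {duplicate_line_ratio(sys.argv[1:]):.1%}")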
Have you looked at Simian (Similarity Analyser)? (I just checked: it's no longer free, but it is available for a 15-day evaluation period.)
Simian (Similarity Analyser) identifies duplication in Java, C#, C, C++, COBOL, Ruby, JSP, ASP, HTML, XML, Visual Basic, Groovy source code and even plain text files. In fact, Simian can be used on any human-readable files, such as ini files, deployment descriptors, you name it.
I have used it in practice and it does work well.
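Invocation is a single jar; something along these lines (the exact jar name and options depend on the version you download):

java -jar simian.jar -threshold=6 "src/**/*.sql"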
Sonar has duplication detection and claims to support PL/SQL, though I've never used it for that.
You would need to beg, borrow, steal, or write a PL/SQL parser and compare the resulting abstract syntax trees. With a code base the size of yours, that might be worthwhile, and there would be other uses for the parser once you're done.
How about this:
http://sourceforge.net/projects/sddforeclipse/
It is open source, and is said to be used by commercial software. It is a plugin for Eclipse, by the way.