Robot Framework resource and library file difference - robotframework

What is the difference between resource and library file in robot framework?
I searched Google but couldn't find the answer.

The resource file content is in Robot Framework syntax. When it is imported in a suite, you can use all the keywords and variables defined in its corresponding sections, and all of its own imports (any Resource and Library entries in its Settings section) become available as well.
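For illustration, a minimal resource file could look like this (all names made up); a suite would import it with Resource    greetings.resource in its Settings section:

*** Settings ***
Library    Collections

*** Variables ***
${GREETING}    Hello

*** Keywords ***
Greet User
    [Arguments]    ${name}
    Log    ${GREETING}, ${name}!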
Libraries, on the other hand, are (usually) written in Python. They can be packages installed through pip, or standalone scripts or modules. In the simplest case, all public functions of a module (more precisely, those whose names do not start with an underscore) are available as keywords in the suite. For more advanced usage (keyword scope, state upkeep), a library has to follow a specific structure, usually accomplished through classes and the identifiers/decorators the framework expects.
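As a sketch, a module-based library can be as simple as this (file and function names are made up):

# MyLibrary.py -- imported in a suite with:  Library    MyLibrary.py
def join_with_dashes(*items):
    """Available in suites as the keyword `Join With Dashes`."""
    return "-".join(items)

def _internal_helper():
    # leading underscore: not exposed as a keyword
    pass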
There is a third type of import, which you haven't asked about but I'm adding for completeness - variable files. Their format is once again Python code, which makes them quite versatile and powerful compared to variables defined in RF syntax (you can construct a variable's content through arbitrarily complex programming constructs).
One caveat to keep in mind with them: the framework treats every attribute of the module as a variable and makes it accessible in your suite; this includes even other modules the file imports :). Thus you have to hide such attributes by giving them a name with a leading underscore (or abuse this side effect for silent imports in some exotic cases :)).
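For example, a variable file could look like this (names made up):

# vars.py -- imported in a suite with:  Variables    vars.py
import os as _os   # the underscore alias hides the imported module from RF

PROJECT_ROOT = _os.path.abspath(".")   # available as ${PROJECT_ROOT}
USERS = ["alice", "bob"]               # available as @{USERS}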
I've included links to the relevant sections of the user guide, for further information.

Related

What's the difference between the include and import statement in NETCONF (.Yin/Yang files)

I understand that you can create a separate YANG file (something like a textual convention to store syntax values for MIBs) and import it into another YANG file to make the data more organised and structured, but I can't seem to understand what the include statement does differently?
Does it "import" the entire file into the file that's including it - and if so would this be read before the file including it...?
Please help :)
YANG relies heavily on a concept known as "namespaces", which stems from XML naming conventions. Each namespace has a unique resource identifier and allows definitions (in different namespaces) to have same names at same definition levels while avoiding name clashes. When you define a YANG module, you are actually defining a namespace.
An import statement is used to access definitions from a foreign namespace (another module), while an include statement introduces a mechanism that allows a single namespace (single module) to be logically split into several files, conveniently named module and submodules. For includes, there is always exactly one module file, which includes all the submodule files that belong to it. A submodule may only belong to a single module and may not be (directly) imported. To an importing module, a module that includes submodules looks like a single entity. Submodules may include each other, but as of YANG version 1.1 this has become unnecessary, since a submodule immediately gains access to all definitions in all submodules and the module they belong to. In YANG version 1 you had to explicitly include a submodule to use its definitions in another submodule, while never being able to access definitions in the module to which they belonged.
An import does not "inline" definitions into the importing module, while an include does exactly that. Importing a module gives you access to its top-level definitions (typedefs, groupings, identities, features and extensions) and allows you to use schema node identifiers that identify nodes in the imported module (for the purpose of augmentation and deviation, for example).
Definitions from a foreign namespace are always accessed via a prefix, which is declared as part of the import statement. Definitions that come from includes do not need to be prefixed when used; if they are, they take the including module's (or submodule's) own prefix.
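A sketch of both mechanisms (module names are hypothetical, except ietf-yang-types, which is a standard module):

module acme-system {
  namespace "urn:example:acme-system";
  prefix acme;

  import ietf-yang-types {       // foreign namespace, used via the "yang" prefix
    prefix yang;
  }
  include acme-system-details;   // submodule: same namespace, no prefix needed

  leaf last-changed {
    type yang:date-and-time;     // definition imported from ietf-yang-types
  }
}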
YANG "compilers" usually process these files when they hit either an import or an include statement. They need to process them in order to be able to resolve definitions in body statements of the defining module. That is why these statements are required to appear in a module's header section.
There is an entire section in the YANG specification dedicated to modules and submodules, where you can read more on the topic.

Adding flow definition to typescript library

I have written a library called redux-async-action-reducer. I have written it in typescript. I want to add flow definition to it.
Is there any way I can keep it along with my library rather than creating a separate definition and putting it in flow-typed?
Something like d.ts for flow definition files?
You could ship your library with a .js.flow file alongside your package entry point. In your case, since your package entry point is dist/index.js, you would create a file at dist/index.js.flow.
Flow will then treat this like a normal source file. You'll have to remember to put // @flow at the top. You can either write functions and classes with stubbed-out implementations, or use declare (e.g. declare export function foo(x: string): string;, and similarly for classes).
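For example, dist/index.js.flow could contain stubs like these (the API names are placeholders, not your actual exports):

// @flow
// dist/index.js.flow -- hand-curated interface for the compiled entry point

declare export function createReducer(
  actionType: string
): (state: mixed, action: { type: string }) => mixed;

declare export class AsyncAction {
  type: string;
  constructor(type: string): void;
}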
Note that this will actually be different than a library definition file -- Flow will treat it like source code.
Flow-typed is the preferred way to distribute libdefs. Using .js.flow files can lead to issues when Flow makes breaking changes between versions. However, since you will be distributing a hand-curated interface, rather than shipping your entire library source as .js.flow files, that issue will be mitigated.

Visualization of xsd Dependencies?

I have a bunch of XSD Files which I did not write myself. The files sometimes import each other:
<xs:import namespace="http://www.mysite.com/xmlns/xXX-YYYY/V" schemaLocation="http://www.mysite.com/xmlns/xXX-YYYY/V/schema_A.xsd"/>
and I would like to get an overview of the dependencies without having to read through all of them.
The URI specified by schemaLocation does not exist, instead a catalog.xml File is used to resolve the schema locations.
http://de.wikipedia.org/wiki/XML_Catalogs
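In essence, the catalog maps each opaque URI to a local file, roughly like this (simplified sketch, local paths invented):

<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <uri name="http://www.mysite.com/xmlns/xXX-YYYY/V/schema_A.xsd"
       uri="schemas/schema_A.xsd"/>
</catalog>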
Can anybody recommend a tool that can visualize the dependencies of my schemas by also processing the information given in the catalog.xml file?
Thanks
Mischa
To follow up on my comment...
I am not aware of any tool that takes into account OASIS catalog files. Have a look at this response, see if it supports what you need (and your platform).
Strictly speaking, there are a number of issues with dependency diagrams, which is why such a question should be qualified with why you want one.
Some think that such a diagram truly shows the dependencies between XSD files; that is not true: it may show what the author thinks the dependencies are, but not necessarily what a processor actually agrees to. schemaLocation is just a hint that processors may or may not use: "may not" if they're instructed otherwise (well-known XSDs could be cached internally, through catalog entries or any other proprietary "catalogs"), or because the processor decides there is no need to load an external reference when there is no use for it anyway (which can happen in some corner cases).
A diagram built from explicit schema locations is definitely easier to produce. It only shows what the author intended; it doesn't mean that it is the "real one" (content may be pulled in indirectly, which makes the whole XSD set valid, while individual XSDs, opened independently of the set, would be invalid).
Trying to build a diagram where dangling or non-existent schemaLocation values are overridden through a catalog is way harder, due to the multitude of ways to structure the content and the resolution mechanism. It would have the same shortcoming as the one above (except that now the author is whoever wrote the catalog file, rather than whoever authored the XSDs).
The "true" dependency can be built by traversing a schema set already loaded and compiled. Even then, you would still need to define criteria regarding dependencies due to substitutable components (elements in substitution groups or derived types, through the use of the xsi:type attribute). That is even harder.
Take a look at this tool: DocFlex/XML XSDDoc.
It is an XML schema documentation generator.
It doesn't visualize xsd dependencies, but it does work with XML catalogs.
The overview of each XSD file lists all the other XSD files it references (i.e. imports, includes or redefines). There is also the opposite list: the schemas that reference the given one.
So, you can use it to figure out which XSD files depend on which.
At least, that will be easier than reading raw XSD files.
As an example, here is documentation generated with that tool: XML Schemas for DITA 1.1. It was generated essentially from two files:
http://docs.oasis-open.org/dita/v1.1/OS/schema/ditaarch.xsd
http://docs.oasis-open.org/dita/v1.1/OS/schema/catalog.xml
ditaarch.xsd is the schema driver that pulls all other schemas (25 in total); catalog.xml is the XML catalog, via which all file references are resolved.
What is specified in the schemaLocation attributes in those schemas themselves is just opaque URIs.

cmake: qt resources inside a module

I have this tree structure:
repository/modules/module1
repository/modules/module2
repository/modules/module..
repository/apps/application1
repository/apps/application2
repository/apps/application..
where the applications use some of the modules.
Now I'd like to put some resources inside a module (like colorful icons inside a widget used by several applications), but something goes wrong.
Inside the module's CMakeLists.txt, if I use only:
set(${MODULE_NAME}_RCS
    colors.qrc
)
...
qt4_add_resources(${MODULE_NAME}_RHEADERS ${${MODULE_NAME}_RCS})
no qrc_colors.cxx is created anywhere. So I've tried to add:
ADD_EXECUTABLE (${MODULE_NAME}
    ${${MODULE_NAME}_RHEADERS}
)
but.. I get this weird error:
CMake Error at repo/modules/ColorModule/CMakeLists.txt:51 (ADD_EXECUTABLE):
add_executable cannot create target "ColorModule" because another
target with the same name already exists. The existing target is a static
library created in source directory
"repo/modules/ColorModule". See documentation for
policy CMP0002 for more details.
(I've changed the path of the error of course)
So... I don't know what to think, because I'm new both to CMake and Qt. What can I try?
EDIT:
if I add ${MODULE_NAME}_RHEADERS and ${MODULE_NAME}_RCS to the add_library command, the qrc_colors.cxx is created, BUT it ends up in repository/modules/module1/built and is not copied into the application build directory...
There are at least two errors in your code.
1) It is usually not necessary to use ${MODULE_NAME} everywhere like that; a plain name would do. The difference is the raw string versus a variable dereference, and it is usually recommended to avoid double dereferences (like ${${MODULE_NAME}_RCS}) if possible.
2) More importantly, you seem to be using ${MODULE_NAME}, which evaluates to "ColorModule" according to the error output, as the name of more than one target; that name is already taken by the module's static library when add_executable runs. You should use distinct target names for different binaries.
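A sketch of how the module's CMakeLists.txt could attach the resources to the existing library target instead (target and source names invented):

# Generate qrc_colors.cxx from the .qrc file ...
qt4_add_resources(MODULE_RHEADERS colors.qrc)

# ... and compile it into the library target that already exists for this
# module, rather than creating a second target with the same name:
add_library(ColorModule STATIC
    colormodule_widget.cpp
    ${MODULE_RHEADERS}
)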
Also, the focus on the resource file is a bit of a red herring here. There are several other issues with your project:
You name cmake files CmakeLists.txt instead of CMakeLists.txt, which inherently causes issues on case-sensitive systems such as my Linux box.
You use Findfoo.cmake, and find_package(foo) for that matter, rather than the usual FindFoo.cmake convention alongside find_package(Foo).
Your FindFoo.cmake is quite odd and should probably be rewritten.
Most importantly, you should use config files rather than find modules.
Documentation and examples can be found at these places:
http://www.cmake.org/Wiki/CMake/Tutorials#CMake_Packages
https://projects.kde.org/projects/kde/kdeexamples/repository/revisions/master/show/buildsystem
When you would like to use a find module, you need to have it at hand already. It tells you what to look for, where things are, or whether they are not anywhere they need to be. It is not something that you should write yourself; you should just reuse existing ones for those projects that do not use CMake, which is why find modules are shipped separately.
It is a bit like putting the treasure map just next to the treasure. Do you understand the irony? :) Once you find the map, you would automatically have the treasure as well. i.e. you would not look for it anymore.

Closure: --namespace Foo does not include Foo.Bar, and related issues

I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. This doesn't seem to be supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the compiler directly, because my code also requires things like "goog.asserts", so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, the Closure Compiler with ADVANCED_OPTIMIZATIONS on will later optimize all these names away. I can fix that by adding @export all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
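For instance (and assuming the compiler is run with --generate_exports, which the @export annotation requires):

/** @export */
FooLibrary.Qux.Rumps.prototype.frob = function() {
  // survives ADVANCED_OPTIMIZATIONS renaming thanks to the annotation
};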
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to call goog.require() for every namespace I am using. This is merely an inconvenience, though I mention it because it is sort of related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root for your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated calcdeps.py script. I still use it for some projects.
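Something along these lines, for example (paths and namespaces are placeholders):

python closurebuilder.py \
  --root=closure-library/ \
  --root=src/ \
  --namespace=FooLibrary.Qux.Rumps \
  --namespace=FooLibrary.Qux.Scrooge \
  --output_mode=compiled \
  --compiler_jar=compiler.jar \
  > foolibrary.min.js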
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However, I can still include the library source (without the exports) in a project and get full dead-code elimination. The work/payoff balance of this, though, must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
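A sketch of such an exports file (symbol names hypothetical):

// exports.js -- included only in the standalone distribution build
goog.require('FooLibrary.Qux.Rumps');

goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);
goog.exportProperty(FooLibrary.Qux.Rumps.prototype, 'frob',
    FooLibrary.Qux.Rumps.prototype.frob);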
My GeolocationMarker library has an example of this strategy.
