I would like to find out how I can import external libraries into my tests. For example, if I use a Java library for random name/number generation, how do I go about using it in my tests?
Thanks
Before I answer: I would advise avoiding Java code if you can. For example, a random name/number generator is very easy to implement in JavaScript, and you can find plenty of ready-made examples out there. If it's JS code, you can easily embed it in your tests using one of the techniques described here. Even better, you should use the capabilities that are provided out of the box with OpenTest: $random and $randomString.
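For instance, a throwaway random-string generator in plain JavaScript might look something like this (the length and character set are arbitrary choices):

```javascript
// Build a pseudo-random alphanumeric string of the requested length.
function randomString(length) {
    var chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    var result = "";
    for (var i = 0; i < length; i++) {
        result += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return result;
}

var name = "user_" + randomString(8);
```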
If you really need to use Java code, there are two ways to do it:
The recommended way: create one or more custom OpenTest keywords as described here. This will make it easier for you to maintain your test suite in the future and it also makes it easier for other members of your team to leverage this work in their own tests, especially if they are not familiar with Java.
The "quick and dirty" way: create a user-jars directory in your test actor's working directory and drop the JAR file in there. Then, call your Java code from JavaScript as described here.
When building the gRPC libraries from source, for example on Android, I encounter the following issues:
1. I have to remove libgrpc_unsecure and libgrpc++_unsecure in order for the initialization of gRPC not to get stuck.
2. I see that there are two libraries: libprotobuf and libprotobuf-lite. What are the differences between them (other than the fact that the lite version probably contains fewer functions), and which one should I include?
3. When generating the .so libraries, the .a libraries are also generated, and if I use the .a libraries a function is not found, so I have to go back to using the .so. But in that case, should I also use the .a? If not, is there a way to build just the .so?
Is there a link that specifies the purpose of each library and what should be used? For example, I don't think grpc++_reflection is of any use in my case, but how do I know what it contains without having to go through every symbol in it? I need to better understand how to use the library files.
Yes, libgrpc and libgrpc_unsecure are mutually exclusive. So you need to choose one as a dependency of your application.
Your interpretation is correct. The lite version has fewer features, so you can try lite first and switch to the regular one if it doesn't fit. You may want to check https://github.com/protocolbuffers/protobuf/blob/main/src/file_lists.cmake to see what is and isn't available in the lite version.
It depends on how you built gRPC.
gRPC has a couple of libraries, but grpc++ is the one you want to link against. I don't think there is a comprehensive doc describing what each library is for, so checking out https://github.com/grpc/grpc/blob/master/CMakeLists.txt is the best thing you can do to understand what features those libraries provide.
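If you drive the build with CMake, a minimal link setup might look like the sketch below (the target names assume gRPC and protobuf were built/installed with their CMake config files; my_app is a placeholder):

```cmake
# Minimal sketch: link an application against gRPC's C++ library.
find_package(protobuf CONFIG REQUIRED)
find_package(gRPC CONFIG REQUIRED)

add_executable(my_app main.cpp)

# gRPC::grpc++ is the secure variant; do not also link grpc++_unsecure.
target_link_libraries(my_app
    PRIVATE
        gRPC::grpc++
        protobuf::libprotobuf)
```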
I'm using openapi-generator for jersey-jaxrs (OpenAPI 3.0). I'd like to control the package where my code is being generated.
I'm setting the api-package, model-package, package-name, and invoker-package options, all to a xxx.yyy.zzz value.
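For reference, the invocation looks roughly like this (the spec path, generator name, and output directory are placeholders):

```shell
openapi-generator-cli generate \
  -i openapi.yaml \
  -g jaxrs-jersey \
  --api-package xxx.yyy.zzz \
  --model-package xxx.yyy.zzz \
  --invoker-package xxx.yyy.zzz \
  --package-name xxx.yyy.zzz \
  -o generated
```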
My problem is that most of the code is generated under gen.xxx.yyy.zzz, and it's not discoverable by the part of the code generated under xxx.yyy.zzz. Implicitly, gen is prepended to the package name. I understand this is convenient in many cases, but not mine. Is there any generator option to avoid this?
I've learned a bit about the Mustache templates and they seem like a possible solution, but maybe a bit too much for my requirements.
Ultimately, I can move the code in gen to the other (non-gen) package manually, and it works, but this is quite inconvenient.
Finally, I found out that you can mark folders in IntelliJ IDEA as "generated sources root", which makes the generated code discoverable to the rest of the project's code.
This doesn't solve my question, but it does solve the problem that originated the question.
When developing a custom language IDE using AvalonEdit, I encountered a problem. I use regex to check the syntax, and it works as designed. However, I want to show syntax errors with a wavy underline text marker. I searched on Google, but the solutions I found are either outdated or not feasible. Any ideas? Thanks ahead.
AvalonEdit does not have this functionality built in. However, it provides all the infrastructure needed to implement it yourself. In the SharpDevelop IDE we have an implementation that should suit your needs.
You'll need some of the code from the SharpDevelop repository (https://github.com/icsharpcode/SharpDevelop/): TextMarkerService and TextMarker, plus their related interfaces and enums.
To make it easier for you, I have created a small sample application:
https://github.com/siegfriedpammer/AvalonEditSamples/tree/master/TextMarkerSample
It uses the AvalonEdit 5 nuget package and contains the classes mentioned above, plus a WPF Window to test it.
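For orientation, wiring the marker service into an editor and adding a squiggly marker looks roughly like this (a sketch based on the classes above; member names may differ slightly between SharpDevelop revisions, and errorOffset/errorLength would come from your regex checker):

```csharp
// Register the TextMarkerService with the editor so it can render markers.
var markerService = new TextMarkerService(textEditor.Document);
textEditor.TextArea.TextView.BackgroundRenderers.Add(markerService);
textEditor.TextArea.TextView.LineTransformers.Add(markerService);

// Mark the offending span with a red squiggly underline.
ITextMarker marker = markerService.Create(errorOffset, errorLength);
marker.MarkerTypes = TextMarkerTypes.SquigglyUnderline;
marker.MarkerColor = Colors.Red;
```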
I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to Closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the Closure compiler directly, because my code also requires things like goog.asserts, so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
This is my main issue.
However, later the Closure compiler, with ADVANCED_OPTIMIZATIONS on, will optimize all these names away. I can fix that by adding @export all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to do goog.require() for every namespace I am using. This is merely an inconvenience, though I mention it because it is sort of related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()s everything. Surely that can't be right?
You would need to specify a --root of your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated CalcDeps.py script. I still use it for some projects.
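A sketch of such an invocation (paths and namespaces are illustrative):

```shell
python closure/bin/build/closurebuilder.py \
  --root=closure-library/ \
  --root=foolibrary/src/ \
  --namespace="FooLibrary.Qux.Rumps" \
  --namespace="FooLibrary.Qux.Scrooge" \
  --output_mode=compiled \
  --compiler_jar=compiler.jar \
  --compiler_flags="--compilation_level=ADVANCED_OPTIMIZATIONS" \
  --output_file=foolibrary.min.js
```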
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However, I can still include the library source (without the exports) in a project and get full dead-code elimination. The work/payoff balance of this, though, must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
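A minimal exports file for the standalone build might look like this (the namespace and method names are illustrative):

```javascript
// exports.js - only included when compiling the standalone distribution.
goog.require('FooLibrary.Qux.Rumps');

// Preserve the public names through ADVANCED_OPTIMIZATIONS.
goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);
goog.exportProperty(FooLibrary.Qux.Rumps.prototype, 'doStuff',
    FooLibrary.Qux.Rumps.prototype.doStuff);
```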
My GeolocationMarker library has an example of this strategy.
There is probably no way to do this, but does anyone know a method of excluding certain functions from a build by use of a metadata tag and/or compiler option?
I want to expose some functions for testing but not have them bloat the application in production. I could create separate testing classes, test for a compiler directive or option, and only load them if necessary, but I like the idea of having the test functions on the actual object (in the class).
Thanks
Ronan
You have to look at conditional compilation. For example, see this blog post: http://www.pixelate.de/blog/debug-and-release-builds-with-as3-conditional-compilation
You can use conditional compilation for that.
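A minimal sketch of how that typically looks (the CONFIG::debug constant, the function, and the _state field are illustrative):

```actionscript
// Compile test builds with:    -define+=CONFIG::debug,true
// Compile release builds with: -define+=CONFIG::debug,false
// When the constant is false, the block below is excluded from the output.
CONFIG::debug
public function dumpInternalState():void {
    trace("internal state:", _state);
}
```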