Call remote test library constructor as keyword from Robot Framework - robotframework

Is it possible to call a Test Library constructor from Robot Framework?
We are using the Remote library interface (NRobot.Server) to connect from RF to a Test Library (implemented in C#).
Currently it exposes all public methods implemented in the Test Library except the constructors.
There are multiple Test Libraries in our project where some functionality is implemented in the constructors.
Hence we need a way to call a constructor as a test step, to execute that functionality whenever required.
If that is not possible we may need to move the functionality from the constructors into new public methods, but we want to avoid that if possible.
Thanks in advance...

In short - no.
When calling a remote library, you're actually just the client in an XML-RPC communication; it is the server's responsibility to have the library instantiated, so it (the very same library) can process your instructions and act as needed. Thus the library is normally already instantiated by the time you call it from your RF code - too late to invoke its constructor.
Naturally, this can be implemented differently - the remote library server could instantiate the target library on a (special) call, and thus you would be able to provide constructor arguments - but that requires a design/code change in the library itself.
This is in contrast to local libraries, which are instantiated by your local interpreter when they are imported.
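As a rough sketch of that "special call" idea: assuming a Python remote library served with the robotremoteserver package (class, keyword, and argument names below are invented purely for illustration; an NRobot.Server library in C# would need the analogous change on its own side), the library could expose its constructor logic as a regular keyword:

from robotremoteserver import RobotRemoteServer

class ExampleLibrary:
    def __init__(self, environment='default'):
        # The constructor runs once, when the server process creates the library instance.
        self._setup(environment)

    def reinitialize_library(self, environment='default'):
        # Public method, so it is exposed as a keyword; a test can re-run the
        # constructor logic on demand by calling it through the Remote library.
        self._setup(environment)

    def _setup(self, environment):
        # ... whatever work the constructor currently does ...
        self.environment = environment

if __name__ == '__main__':
    # Serve the library over XML-RPC so RF's Remote library can talk to it.
    RobotRemoteServer(ExampleLibrary(), host='127.0.0.1', port=8270)

From the test side that is then just an ordinary keyword call through the Remote library (e.g. Reinitialize Library    staging), so the constructor's functionality can be triggered at any point in a test.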

Related

Calling a DBus service from a pure QML plasmoid

Could anybody suggest a tutorial or example of how to call a custom DBus service (on the system bus) from a pure QML KDE plasmoid, without an intermediate layer? Is it possible at all?
The standard plasmoids like org.kde.plasma.battery or org.kde.kscreen use the DataSource approach, which likely requires a shared library or other code in C++. Other plasmoids invoke wrappers (e.g. in Python) to talk to a DBus service.
So the question: is it possible to call a given D-Bus service directly from QML/JS without an intermediate layer, at most by importing some (standard) KDE/Qt/Plasma library?

Azure function: This method cannot be called during the application's pre-start initialization phase

I've created an Azure Function and a class which implements the IExtensionConfigProvider interface, with the Initialize method doing some "bootstrapping" on "start up". Included in the bootstrapping is some Unity registration where I use BuildManager.GetReferencedAssemblies() to complete convention registrations.
However, I'm getting a "This method cannot be called during the application's pre-start initialization phase" error when BuildManager.GetReferencedAssemblies() is called. I've even put this method in the actual function code (so not at "start up") and still get this error. Any ideas?
It sounds like you're trying to do something that's not supported. IExtensionConfigProvider.Initialize() runs in a very protected context and you can't really control when it runs - it should just be used to register binding rules. Azure Functions does not yet have dependency injection support.
If you have code that you want to run early, you could put it in a static constructor.
If you want more control, you could switch to using the underlying WebJobs SDK, which allows you to own the main function; Azure Functions is layered on top of the WebJobs SDK.

How do I get access to Castle DynamicProxy generation options within MOQ?

Does MOQ give access to Castle's DynamicProxy generation? Or are there configurable static methods or something in the Castle namespace that would allow me to tune MOQ's proxy gen behavior?
Some Background
I am mocking a WCF service endpoint (IWhatever). WCF automatically adds async variants for methods (e.g. IWhatever.DoWork() is also realized as IWhatever.DoWorkAsync()).
I'm looking to self-host this service using the Mock<IWhatever> object; basically spoofing this external web service for my system. However, when the [self-hosted] WCF host tries to create a DoWorkAsync() method, it already exists... which ultimately throws errors when opening the self-hosted/mocked IWhatever endpoint. (Note: I don't have access to the original contract to use it directly.)
Sooo... it looks like Castle DynamicProxy lets one select which methods should be intercepted/generated (see: http://kozmic.net/2009/01/17/castle-dynamic-proxy-tutorial-part-iii-selecting-which-methods-to/). I was thinking I would use that to not intercept calls on methods ending with "[...]Async". However, I don't see where I would add this customization rule to the proxy generation within MOQ; hence my question.

Why do Meteor methods go in the models.js file?

According to this video, Meteor methods should be defined in the models.js file that is available on both the client and the server.
If methods are supposed to be secure procedures that the client invokes on the server, why are they defined in models.js? Clients call methods with Meteor.call, so doesn't it make sense to define our methods on the server, not in models.js?
You don't have to put methods in a models.js file; you can put them anywhere. They just happened to name the file models.js in the video.
Meteor.methods is an "Anywhere" method, which means that it can exist on both the server and the client. If you look at the docs, you'll see the difference explained:
Calling methods on the server defines functions that can be called remotely by clients.
[...]
Calling methods on the client defines stub functions associated with server methods of the same name.
In the video they're showing a demo of how methods and other features of Meteor work, so they weren't concerned with where exactly the methods were placed.
The video you posted is merely a teaser of what Meteor can do, not a tutorial. The documentation explains how methods work: on the client the method will only be stubbed.
If you make the method available on the server only, the method won't be stubbed. You should also read the Meteor concepts documentation.

In flex how do I pass data retrieved from a remote object service to a modules interface?

I found a nice "RemoteService" class in this Adobe tutorial that creates a RemoteObject and contains the functions for handling the result and fault events. If I wanted to use this approach, how could I pass the data from the result handler to interfaces that modules of the main application could use?
I could put the RemoteService/RemoteObject in the modules, but (in my opinion - and I could be wrong) the best design seems to be making the remote calls in the main app and passing the data along to the modules.
I think you're correct -- have the remote calls in the main app if other parts of the app will need the data.
To get data into the module, just set a property of the module to the data. So a result handler in the main app sets myModule.someObject = event.result.someObject.
To get data from the module back to the app, dispatch an event. This way the module stays loosely coupled to whatever its host is.
