Robot framework for distributed testing - robotframework

We are in the process of finalizing a test framework and are pretty impressed with Robot Framework and STAF.
We are unable to decide on the optimal approach for the requirements below:
Want to be able to start tests from a server by selecting clients
Can we display all clients in the existing network? The number of clients can increase or decrease over time.
When we select a client, we want to fetch its properties dynamically
Is there a way to display dynamic properties in RIDE/STAX?
Can we use any other framework and integrate it with Robot? STAF/STAX?
The user should be able to choose the tests a client supports and build a configuration
Can we use RIDE or something similar to build test configurations per client?
Launch all clients in parallel, monitor them, and report results
Is there a way to launch and monitor results in parallel?

This is only a partial answer, based on my understanding of your question and lacking full knowledge of your situation.
Robot Framework can be started from a .cmd or .bat file. I don't know for certain whether the results can be sent somewhere else to be saved from there, but I'm fairly confident they can.
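For example, the same run can be triggered from Python with robot.run, which also lets you choose where the output files are written; the paths and the variable below are placeholders:
# Sketch: start a Robot Framework suite from Python and control where
# log.html / report.html / output.xml are written. Paths are placeholders.
from robot import run

rc = run(
    "tests/",                                 # suite directory or .robot file
    outputdir=r"\\server\share\results",      # save the results on a share
    variable=["CLIENT:client-42"],            # pass data into the suite
)
print("robot finished with return code", rc)  # 0 means every test passed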
1.1. If you can get that list in Python or Java, then yes, you can do it in Robot Framework and pass variables and pass/fail results around the test suite. You might be able to use both at once, but I'm not sure yet.
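As an illustration only (the discovery code below is hypothetical and would be replaced by a real STAF or network query), a small Python keyword library can hand that list to Robot Framework:
# ClientDiscovery.py - hypothetical keyword library for listing test clients.
class ClientDiscovery:

    def get_test_clients(self):
        """Return the hostnames of the clients currently on the network."""
        # Placeholder: replace with a real STAF / registry / network lookup.
        return ["client-01", "client-02", "client-03"]

# Possible usage from a .robot file:
#   Library    ClientDiscovery.py
#   ${clients}=    Get Test Clients
#   FOR    ${client}    IN    @{clients}
#       Log    ${client}
#   END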
Handled with the first item.
Robot + Python/Java can probably handle that.
3.1. I don't know; I use PyCharm as my Robot Framework IDE. It has an integrated console and makes managing Python/Robot Framework files (and many other languages) quick and easy. But I'd imagine that using Robot's Log To Console keyword you could send the results directly to the console. So, yes.
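A Python keyword can write to the console in the same way via robot.api.logger; the property names here are invented for the example:
# Sketch: print dynamically fetched client properties straight to the console.
from robot.api import logger

def show_client_properties(properties):
    """Log a dictionary of client properties to the console."""
    for name, value in properties.items():
        logger.console("%s = %s" % (name, value))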
3.2. Short answer: Not that I've ever heard of, but if you can run those with Java/Python and return the Pass/Fail results to Robot Framework, then yes.
Using multiple tags, the Robot Framework programmer can, at runtime, either run the test excluding particular tags or run the test including particular tags.
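For example (the tag names are placeholders), a per-client selection can be made at runtime either from the command line or from Python:
# Sketch: run only the tests tagged for one client, skipping work in progress.
# Command-line equivalent: robot --include clientA --exclude wip tests/
from robot import run

run("tests/", include=["clientA"], exclude=["wip"])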
In theory: yes. Again, not something I've ever done, and honestly telling you how is beyond me, but I don't know a reason why you couldn't as long as you don't have any custom keywords that move the mouse cursor.
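One possible sketch, untested and assuming each client is addressed through a suite variable, is to start one Robot process per client and collect the return codes (0 means all tests passed):
# Sketch: run the same suite once per client in parallel and gather the results.
# Assumes the robot command is on PATH; client names and paths are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

clients = ["client-01", "client-02", "client-03"]

def run_for_client(client):
    rc = subprocess.call([
        "robot",
        "--variable", "CLIENT:%s" % client,
        "--outputdir", "results/%s" % client,
        "tests/",
    ])
    return client, rc

with ThreadPoolExecutor(max_workers=len(clients)) as pool:
    for client, rc in pool.map(run_for_client, clients):
        print(client, "passed" if rc == 0 else "failed (return code %d)" % rc)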

Related

Karate UI testing: how to prevent browser closing at the end of Scenario

I am working on UI automated tests using the Karate framework. I am really enjoying working with this great tool, but there is something I have been trying to resolve for a while and can't find any solution for.
I have a feature file with 3 scenarios, and I want the browser to be opened and the login performed only once, before all scenarios, and the browser to be closed only after the last scenario has finished.
In my case the driver is started in a separate login.feature file, which is called from the Background using callonce read('login.feature'). I've seen somewhere that the driver should stay open if it is started before the Scenario, but in my case it's not working. What am I doing wrong?
You can use this answer as a reference: https://stackoverflow.com/a/60581024/143475
So it is supposed to work if you create one feature and then call the other scenarios from that feature; by default, Karate is designed to close the driver after a Scenario.
I also recommend that when you have a single flow, you don't try to split it into different Scenarios. Otherwise, be prepared to call different features from one Scenario.

Creating weblogic domain for SOA and extending it silently

To start with, I have to say that I am not an Oracle developer and I have never used any Oracle product. But I have a task in hand to automate Fusion Middleware. I have automated the installation using response files. However, I am not sure how to create the SOA domain and extend it to include the SOA components.
I am using this document http://docs.oracle.com/cd/E23943_01/core.1111/e12036/toc.htm
(steps 8, 9, and 10). Manually, using the GUI, I am able to do it, but I need to automate it. I know it can be done via WLST scripts. Does anyone have a sample script as per the documentation? Unfortunately, I don't have much time to learn WLST scripting. Any help would be appreciated.
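For reference, the create-then-extend flow can be scripted with WLST offline commands along the following lines (run with wlst.sh); every path, password, and template file name below is a placeholder that depends on the installation:
# create_soa_domain.py - rough sketch of the WLST offline flow.

# 1. Create a base WebLogic domain from the default template.
readTemplate('/u01/oracle/wlserver_10.3/common/templates/domains/wls.jar')
cd('Servers/AdminServer')
set('ListenPort', 7001)
cd('/Security/base_domain/User/weblogic')
cmo.setPassword('welcome1')
setOption('OverwriteDomain', 'true')
writeDomain('/u01/oracle/user_projects/domains/soa_domain')
closeTemplate()

# 2. Reopen the domain and extend it with the SOA Suite template.
readDomain('/u01/oracle/user_projects/domains/soa_domain')
addTemplate('/u01/oracle/Oracle_SOA1/common/templates/applications/oracle.soa_template_11.1.1.jar')
# The SOA data sources (the schemas created by RCU) still have to be
# pointed at the database before the domain is usable.
updateDomain()
closeDomain()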

SpecFlow, Webdriver and Mocks - is it possible?

The short version of the question is that we keep stumbling upon BDD definitions that more or less require different backend states, which leads to the need for a mock of sorts for ASP.NET/MVC. I know of none, which is why I ask here.
Details:
We are developing a project in ASP.NET (MVC3/Razor engine) and are using SpecFlow to drive our development.
We quite often stumble into situations where we need the web page under test to behave in a certain manner so that we can verify the behavior, e.g.:
Scenario: Should render alternatively when backend system is down
Given that the backend system is down
And there are no channels for the page to display
When I inspect the webpage under test
Then the page renders alternative html indicating that there is a problem
For a unit test, this is less of an issue: mock the controller's dependencies and verify that it delivers the correct results. For a SpecFlow test, however, this more or less requires alternate configurations.
So, is it possible at all, or are there known software patterns for developing web pages using BDD that I've missed?
Even when using SpecFlow, you can still use a mocking framework. What I would do is use the [BeforeScenario] attribute to set up the mocks for the test, e.g.
[BeforeScenario]
public void BeforeShouldRenderAlternatively()
{
    // Set up the mocks here, e.g. stub the backend-status dependency so it reports that the system is down.
}
This SO question might come in handy for you also.
You could use Deleporter
Deleporter is a little .NET library that teleports arbitrary delegates into an ASP.NET application in some other process (e.g., hosted in IIS) and runs them there.
It lets you delve into a remote ASP.NET application’s internals without any special cooperation from the remote app, and then you can do any of the following:
Cross-process mocking, by combining it with any mocking tool. For example, you could inject a temporary mock database or simulate the passing of time (e.g., if your integration tests want to specify what happens after 30 days or whatever)
Test different configurations, by writing to static properties in the remote ASP.NET appdomain or using the ConfigurationManager API to edit its entries.
Run teardown or cleanup logic such as flushing caches. For example, recently I needed to restore a SQL database to a known state after each test in the suite. The trouble was that the ASP.NET connection pool was still holding open connections to the old database, causing connection errors. I resolved this easily by using Deleporter to issue a SqlConnection.ClearAllPools() command in the remote appdomain – the ASP.NET app under test didn’t need to know anything about it.

Simple web load testing app with GUI

I am looking for a simple web load-testing tool that has a GUI.
I need to run lots of small and simple tests (like hit page X 100 times and let me know how long it took).
I do not want to have to script every test as I would have to using WCAT or AB.
Also free would be nice.
If it matters I am using IIS7.
There are a number of services online that can do this type of testing for you as well. Of course, one of the downsides to this approach is that it's harder to correlate the data from the service (which is what can be observed externally) with your own internal data about disk I/O, DB ops, etc. If you end up going this route, I would suggest finding a vendor that will give you programmatic access to the raw test result data.
I found JMeter.
It is very advanced (way more than I need) but it has a nice GUI and has very good docs.
http://jmeter.apache.org/

WatiN test data reset/clean up

I'm wondering how people are currently resetting their data / cleaning up test remnants for their WatiN/Watir tests?
For example, let's say there's a test to add a user into the system and the username has to be unique. Obviously the first run without any users should work fine, but the second run will fail without manual intervention.
There are a couple of strategies you could use for this. I am assuming that you are using WatiN, with NUnit or VS unit tests to run your tests.
Use transactions
An approach that is used when unit testing is that you "wrap" the whole test in a transaction and at the completion of the test roll the transaction back. In .net you can use System.Transactions for this.
Build a "stub page"
Build a page in your application that uses the existing business logic to delete your data. This page would need to be secured and ideally not even deployed to production.
This is the approach that I would recommend.
Call a web service
Develop a web service, or call one directly from the app tier of the application, to perform the delete. You will probably need to develop this as well.
Clean up directly
Build some classes in your test code to access the data and clean it up.
With any of these, you will need to clean up both before and after you run your test, i.e. in the test setup and test cleanup methods. The reason to do it twice is that you should assume your test has failed and not cleaned up properly.
Use LINQ to SQL
AFAIK, if you are using LINQ to SQL, it works in-memory and wraps the whole update in a transaction for you automatically. If you simply don't call the SubmitChanges() method, then you should be fine, but I haven't tested this myself.
I have asked a developer to make a script that will reset the database. After a suite of tests, I just call that script and start from a clean database.
Mike - your question isn't unique to Watir/WatiN. It applies to any UI testing, so search around for similar solutions for Selenium, Windmill, and even headless integration tests (HtmlUnit, API tests, etc.). I've answered this question a couple of times on StackOverflow.
WatiN is for UI testing.
In order to test the scenario you are looking at, you can generate the user id in C# so that it is unique on every run (rather than reusing the value that was stored when you created the test).
