How to combine Automation Tests that use different runners and platforms? - automated-tests

Is there a way (tools or solutions) to combine different suites for different technologies without writing your own test runner?
I already have tests for different components of the system (Android, Web, back-end), but now I need to combine them into a single suite. The test suites must run in a specific order (e.g. an Android test sends data, then a Web test validates that the data is displayed correctly), so it would be nice to be able to write a config like this:
const superMegaSuite = [
{ type: 'TestNG', suite: 'SendData' },
{ type: 'Karma', suite: 'Check My Data' },
];
Technologies used for testing that need to be combined:
Karma + Jasmine + Protractor (for web)
TestNG + Appium (for Android)
P.S. I understand that, technically, the task could be solved by writing a custom runner that acts as an abstraction over the existing runners. However, I want to avoid writing my own implementation if a suitable solution already exists.

You may try Outthentic. I am not sure I understand the specifics of your project perfectly, but you could go like this:
$ cat hook.bash
run_story SendData
run_story CheckMyData
$ cat modules/SendData/story.bash
echo run send data suite
$ cat modules/CheckMyData/story.bash
echo run check my data suite
So you can organize different types of tests into stories and run them in order:
$ strun
2018-08-14 18:31:47 : [path] modules/SendData/
run send data suite
ok scenario succeeded
2018-08-14 18:31:47 : [path] modules/CheckMyData/
run check my data suite
ok scenario succeeded
STATUS SUCCEED
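In a real setup each story would invoke the actual runner instead of an echo. A minimal sketch, assuming the Android suite is driven by Maven Surefire/TestNG and the web suite by a Karma config in the web project (the file names and commands below are placeholders for whatever your projects actually use):
$ cat modules/SendData/story.bash
# assumption: the Appium/TestNG suite is described in send-data.xml
mvn test -Dsurefire.suiteXmlFiles=send-data.xml
$ cat modules/CheckMyData/story.bash
# assumption: karma.conf.js lives in the web project root
npx karma start karma.conf.js --single-run
The stories still run in the order listed in hook.bash, which gives you the SendData-then-CheckMyData ordering from the question.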

Related

Mock real gRPC server responses

We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in Kubernetes) in our pipeline. It also relies on an external gRPC server over which we have no control.
Above is a picture of what we'd like to have happen. The white box on the left is code that provides the Microservice Boundary with 'external' data. It then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated with those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses for the two gRPC services we call (the blue box).
Since our integration tests are self-contained right now, and we don't want to write an entirely new gRPC server implementation just to mimic calls, is there a way to stand up a real gRPC server and configure it to return canned responses? The request is pretty much like a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose those as endpoints (the proto files must be able to have different package names);
configure the responses to each call using files we can store in source control;
run it in a Linux Docker container (no Windows).
I did find gripmock, which seemed almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 gripmock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today, there is a commercial tool that supports gRPC called Traffic Parrot, which allows you to create gRPC mocks based on your proto files. You can give it multiple proto files, store the mocks in Git, and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on proto files; you have to create them manually. Also, as of today the project has not been keeping up to date with proto and gRPC developments, e.g. the package-name issue you discovered yourself above (it works only if the package names in the different proto files are the same). There are a few other open-source tools like grpc-wiremock, grpc-mock or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
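For illustration, a GripMock stub is a small JSON document that maps a request matcher to a canned reply, roughly like the following (service, method and field names here are placeholders, not taken from your protos):
{
  "service": "DataService",
  "method": "GetRecord",
  "input": { "equals": { "id": "123" } },
  "output": { "data": { "name": "sample record" } }
}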
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see the sample data as well.
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
const implementations = {
  ex1: (call: any, callback: any) => {
    const response: any =
      new this.proto.ExampleResponse.constructor({msg: "the response message"});
    callback(null, response);
  },
};
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();

Approach to securing test data for public repositories

We have set up nightly testing for an open source project (MERN stack). The Selenium tests require test data which we do not want to make public. Initially we tried to keep the test data as environment variables on the build server (CircleCI), but this approach is not scalable. We do not own any infrastructure, so any database- or storage-bucket-based solution would incur additional cost, which is not feasible on the org's current budget. Is there a smart solution to keep the test data files secure at no additional cost?
As you know, the challenge is that you need somewhere to put that data. If you're trying to do this without paying any providers, the best I can suggest is Amazon's free tier for either S3 storage or a database. https://aws.amazon.com/free/
Those could be securely accessed from CircleCI by just storing the API keys as project variables.
CircleCI's AWS S3 orb encapsulates installing and setting up the AWS CLI to simplify this.
version: 2.1
orbs:
  aws-s3: circleci/aws-s3@1.0.2
jobs:
  build:
    docker:
      - image: 'circleci/node:10'
    steps:
      - checkout
      - aws-s3/copy:
          from: 's3://your-s3-bucket-name/test_data/somefile.ext'
          to: test_data.ext
      - run: # your test code here

Robot Framework: Is there any way we can execute test cases (the same test cases/test suite) on multiple Android devices at once?

Once the app is launched on multiple Android devices, how can we execute the same test cases on multiple mobile devices at once? I have a bunch of test cases to execute on multiple Android devices. I am trying to use a For loop, passing the udid or the Appium server name into the test cases, but it's not working; it executes the test case on a single device only. Is there any way we can execute the same test cases (or test suite) on multiple Android devices at once?
You could use https://github.com/mkorpela/pabot with --argumentfile[NUMBER] options.
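A minimal sketch of that approach (device IDs, ports and file names below are placeholders): each argument file pins one device, and pabot runs the same suite once per argument file, in parallel.
$ cat device_a.args
--variable UDID:emulator-5554
--variable APPIUM_SERVER:http://127.0.0.1:4723/wd/hub
$ cat device_b.args
--variable UDID:emulator-5556
--variable APPIUM_SERVER:http://127.0.0.1:4750/wd/hub
$ pabot --argumentfile1 device_a.args --argumentfile2 device_b.args tests/
Your suite then reads ${UDID} and ${APPIUM_SERVER} instead of hard-coding them.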
What you basically want is parallel execution of your tests. There are many tools available to achieve this with ease and broad coverage (a vast number of devices and flavors), like SeeTest Cloud, Xamarin Test Cloud, AWS Device Farm, Perfecto, etc.
However, if you want to achieve this using Appium and TestNG, it is still possible. Below are the high-level steps:
Launch multiple instances of the Appium server, using a different address, port, callbackPort, and bootstrapPort as part of each node command.
Get the UDID of each device and pass it in the TestNG XML.
Run the XML as a suite (a sketch of such a suite file is shown after the link below).
Below is the link with the exact commands and steps:
http://toolsqa.com/mobile-automation/appium/appium-parallel-execution-using-testng/
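A minimal sketch of such a suite XML, assuming your test class reads udid and appiumUrl as TestNG parameters (class name, device IDs and ports are placeholders):
<suite name="MultiDeviceSuite" parallel="tests" thread-count="2">
  <test name="Device A">
    <parameter name="udid" value="emulator-5554"/>
    <parameter name="appiumUrl" value="http://127.0.0.1:4723/wd/hub"/>
    <classes><class name="com.example.tests.SendDataTest"/></classes>
  </test>
  <test name="Device B">
    <parameter name="udid" value="emulator-5556"/>
    <parameter name="appiumUrl" value="http://127.0.0.1:4750/wd/hub"/>
    <classes><class name="com.example.tests.SendDataTest"/></classes>
  </test>
</suite>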
You can use something like the following as a solution to your problem. As I said earlier in my answer, you can save the drivers in a dictionary &{drivers} and use it in your loop to perform repetitive actions on all your devices.
*** Settings ***
Library           AppiumLibrary
Library           Collections
Library           Process

*** Variables ***
${APPIUM_SERVER1}    http://127.0.0.1:4723/wd/hub
${APPIUM_SERVER2}    http://127.0.0.1:4750/wd/hub
${udid_device1}      udid of device 1
${udid_device2}      udid of device 2

*** Keywords ***
setup and open android phone A
    &{drivers}=    Create Dictionary
    ${androiddriver1}=    Open Application    ${APPIUM_SERVER1}    platformName=android    platformVersion=7.0    deviceName=android    udid=${udid_device1}    automationName=uiautomator2
    ...    appPackage=com.android.contacts    newCommandTimeout=2500    appActivity=com.android.contacts.activities.PeopleActivity
    Set To Dictionary    ${drivers}    ${udid_device1}=${androiddriver1}
    Set Suite Variable    ${drivers}

setup and open android phone B
    ${androiddriver2}=    Open Application    ${APPIUM_SERVER2}    platformName=android    platformVersion=7.0    deviceName=android    udid=${udid_device2}    automationName=uiautomator2
    ...    appPackage=com.htc.contacts    newCommandTimeout=2500    noReset=True    appActivity=com.htc.contacts.BrowseLayerCarouselActivity
    Set To Dictionary    ${drivers}    ${udid_device2}=${androiddriver2}
    Set Suite Variable    ${drivers}
    Log Dictionary    ${drivers}

Open URL
    :FOR    ${key}    IN    @{drivers.keys()}
    \    ${value}=    Get From Dictionary    ${drivers}    ${key}
    \    Log    ${key}, ${value}
    \    # repetitive actions here
You can save the sessions returned by Open Application in a dictionary and use them in a loop to perform actions on every phone.
Please edit your question with code for further help.

Which is the best way to test database packages?

I am currently working on a project where we need to test the database packages and functions.
We need to provide the input parameters to the database packages and test that they return the expected values; we also want to test the response time of the requests.
Please advise whether there is any tool available for this, or whether we should write our test cases in JUnit or some other framework.
Which one would be the best approach?
I've used a more native approach when I had to do DWH testing: I arranged the test framework around the dev data-integration framework that was already in place, so I had a lot of reusable jobs, configurations and code. But using OOP as you suggest
write our test cases in Junit
is a way to go too. Keep in mind, though, that DWH designs are often very complex (with a lot of aspects to consider), and interacting through the persistence layer is not always the best candidate for a testing strategy. A more DB-oriented solution (like tSQLt) offers significantly better performance.
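For context, a tSQLt test is itself just a stored procedure inside a test-class schema. A minimal sketch, assuming you are on SQL Server and dbo.GetDiscount stands in for whatever your packages expose (names and the expected value are placeholders):
EXEC tSQLt.NewTestClass 'PackageTests';
GO
CREATE PROCEDURE PackageTests.[test GetDiscount returns expected value]
AS
BEGIN
    -- call the code under test and compare against the expected value
    DECLARE @actual INT = dbo.GetDiscount(42);
    EXEC tSQLt.AssertEquals @Expected = 10, @Actual = @actual;
END;
GO
EXEC tSQLt.Run 'PackageTests';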
Those resources helped me a lot:
dwh_testing
data-warehouse-testing
what-is-a-data-warehouse-and-how-do-i-test-it
My framework Acolyte provides a JDBC driver & tools designed for such purposes (mock-up, testing, ...): http://tour.acolyte.eu.org
It's already used in some open source projects (Anorm, YouTube Vitess, ...), either from vanilla Java or using its Scala DSL.
handler = handleStatement.withQueryDetection(...).
    withQueryHandler(/* which result for which query */).
    withUpdateHandler(/* which result for which update */);

// Register prepared handler with expected ID 'my-unique-id'
acolyte.Driver.register("my-unique-id", handler);

// then ...
Connection con = DriverManager.getConnection(jdbcUrl);
// ... Connection |con| is managed through |handler|
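If I read the Acolyte tour correctly, the jdbcUrl above embeds the registered handler ID, along these lines (treat the exact format as an assumption to verify against the docs):
String jdbcUrl = "jdbc:acolyte:anything-you-want?handler=my-unique-id";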

Running code only for tests using Jasmine package

I am using the sanjo:jasmine and velocity:html-reporter packages in my app to try and implement some unit and integration testing. Using this tutorial as a guide, I have a few unit tests and a couple integration tests done. What I am not able to figure out is how to get code to run in the "test" environment that is not part of a unit test or integration test, but needs to run prior to the tests and only for the tests.
What I am trying to solve is that I need some dummy users created for testing, but I do not want them in my production app. Sort of like an "init" phase where you can build the mockups and insert any data you need. Is there a way to accomplish this?
I would recommend that you create some seed or fake data for your tests using factories.
I would recommend that you try the following packages:
anti:fake - Fake text and data generator for Meteor.js
dburles:factory - A package for creating test data or for generating fixtures.
You can install these packages using this command:
meteor add anti:fake dburles:factory
Create your factory data for the test environment only.
I'd create a file called server/seeds.js with the following content:
Meteor.startup(function() {
  Factory.define('user', Users, {
    username: "test-user",
    name: "Test user",
    email: "test@example.com"
    // add any other fields you need
  });

  var numberOfUsers = 10;

  // Ensure this is the test environment
  if (process.env.NODE_ENV === 'test') {
    // Create the users from the factory definition
    _(numberOfUsers).times(function(n) {
      Factory.create('user');
    });
  }
});
You can follow this Factory approach for any data, not just Users.
If your Users need to log in, such as when you're using accounts:base, then I would consider an alternative approach to using Factory data:
var email = "test@example.com";
var password = "secret";
var name = "Test user";
Accounts.createUser({email: email, password: password, profile: {name: name}});
Please see Accounts.createUser in the Meteor docs for more details.
If you're using sanjo:jasmine, you can insert data into the mirrored DB before writing your specs (after the describe clause and before the it clauses), and this data will be available to all specs.
Also, you can use beforeEach() to provide data for each spec, and then delete it using afterEach().
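For example, a spec file could seed a fixture user before each spec and remove it afterwards; a rough sketch (the collection, field values and the expectation are placeholders):
describe("data that every spec needs", function() {
  beforeEach(function() {
    // inserted into the mirrored test database, not your production data
    Meteor.users.insert({username: "fixture-user", profile: {name: "Fixture User"}});
  });

  afterEach(function() {
    Meteor.users.remove({username: "fixture-user"});
  });

  it("can see the fixture user", function() {
    expect(Meteor.users.findOne({username: "fixture-user"})).toBeDefined();
  });
});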
Here you can find more info.
I've been using mike:mocha, and as long as your specs are written inside a folder called tests (and then client / server, respectively), Velocity puts the data in Velocity-specific collections. I run the same Meteor method I use to insert a document in my main app, but Velocity knows to put it in the mirrored version.
