Some tests rely on external services (e.g. APIs). Sometimes these external services go down. This can cause tests to fail and (worse) continuous integration builds to fail.
Is there a way to instruct testthat tests and regular package examples to re-run more than once, ideally with the second attempt happening 5 minutes after the first?
Ideally you would write your tests so that they don't call the API or the database at all.
Instead, mock the API endpoints according to the specification, and also write tests for cases where the API returns unexpected results or errors.
Here is an example of a package that allows you to do so:
https://github.com/nealrichardson/httptest
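The question is about R and testthat, and httptest is the R tool for this, but the pattern itself is language-agnostic. Here is a minimal illustrative sketch in Python using unittest.mock to stub out the HTTP call and to cover both a success and an error case; the endpoint URL and the fetch_user client function are hypothetical:

```python
# Illustrative only: a client function that would normally hit a real API,
# tested without any network access by stubbing requests.get.
from unittest import TestCase, mock

import requests


def fetch_user(user_id):
    """Hypothetical client code under test: calls an external API."""
    resp = requests.get(f"https://api.example.com/users/{user_id}", timeout=5)
    resp.raise_for_status()
    return resp.json()


class FetchUserTests(TestCase):
    @mock.patch("requests.get")
    def test_returns_parsed_body_on_success(self, mock_get):
        # The mock stands in for the API response defined by the specification.
        mock_get.return_value.raise_for_status.return_value = None
        mock_get.return_value.json.return_value = {"id": 1, "name": "Ada"}
        self.assertEqual(fetch_user(1)["name"], "Ada")

    @mock.patch("requests.get")
    def test_propagates_server_errors(self, mock_get):
        # Also cover the case where the API returns an error.
        mock_get.return_value.raise_for_status.side_effect = requests.HTTPError("500")
        with self.assertRaises(requests.HTTPError):
            fetch_user(1)
```

httptest gives you an analogous structure in R on top of testthat, with mocked responses standing in for the real endpoints.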
If you are worried that your vendor might change the API, talk to them and get details on their API change management.
Ask them this:
What is your change management process?
How do you avoid introducing breaking changes to existing endpoints that people are using?
(modified from this post)
If you have to check that the API is still the same, draw a line between API validation and testing of your own code.
You will need two separate processes:
Unit / acceptance tests that are executed against mocks of the API endpoints. These run fast and are focused on the logic of your application.
A pipeline for regular validation of the API. If your code is already live, you are likely to find out about any breaking changes in the API anyway, so this is largely redundant. In exceptional cases it can be useful, but only with a very bad vendor.
Related
I want to automate testing of a REST API built using DRF (Django REST Framework). The automation should run the test cases every 2 minutes, and it has to run continuously, not on a local machine (the API is deployed on AWS). If any test case fails, the failure should be recorded in a log report. It can be any type of service. I am currently using Postman to run the test cases, but I am on the free plan, so I have a limited number of API calls, and Postman monitors don't support minute-level scheduling.
How can I do this? Please help!
Yes. After some research and with the help of my mentors, I found a way to automate API testing.
Coding part:
I have a Python script which uses the requests package to call the API, and then I use some conditions to check that the response behaves according to the requirements.
A few example test cases are the response status code, the response time, and a schema test.
Automation: I am using an AWS Lambda function and Amazon EventBridge to automate and schedule the execution of this script at the required time intervals.
In case of any exceptions or errors, we can send alerts to a Slack channel or Microsoft Teams.
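To make that concrete, here is a rough sketch of what such a check script might look like as a Lambda handler. The endpoint URL, required fields, response-time limit, and Slack webhook are hypothetical placeholders, and the requests package would need to be bundled with the deployment (it is not part of the base Lambda runtime):

```python
# Sketch of a scheduled API validation check, run by AWS Lambda via an
# EventBridge schedule. All URLs and expected values below are placeholders.
import json
import time
import urllib.request

import requests  # must be packaged with the Lambda deployment

API_URL = "https://api.example.com/health"               # hypothetical endpoint
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX"   # hypothetical webhook
MAX_RESPONSE_SECONDS = 2.0
REQUIRED_FIELDS = {"status", "version"}                  # hypothetical schema check


def run_checks():
    """Return a list of failure descriptions; an empty list means all checks passed."""
    failures = []
    started = time.monotonic()
    resp = requests.get(API_URL, timeout=10)
    elapsed = time.monotonic() - started

    if resp.status_code != 200:
        failures.append(f"status code {resp.status_code}, expected 200")
    if elapsed > MAX_RESPONSE_SECONDS:
        failures.append(f"response took {elapsed:.2f}s, limit is {MAX_RESPONSE_SECONDS}s")
    try:
        missing = REQUIRED_FIELDS - set(resp.json())
        if missing:
            failures.append(f"missing fields in response body: {sorted(missing)}")
    except ValueError:
        failures.append("response body is not valid JSON")
    return failures


def notify_slack(message):
    """Post a plain-text alert to the (hypothetical) Slack incoming webhook."""
    body = json.dumps({"text": message}).encode("utf-8")
    req = urllib.request.Request(SLACK_WEBHOOK, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)


def lambda_handler(event, context):
    failures = run_checks()
    if failures:
        notify_slack("API validation failed: " + "; ".join(failures))
        # Raising also marks the invocation as failed in CloudWatch logs/metrics.
        raise RuntimeError("; ".join(failures))
    return {"ok": True}
```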
We're implementing the Pact framework for testing a couple of microservices (Scala backend & iOS/Android frontend). To test the Pact implementation itself, what sort of negative tests / defect seeding can we do to make sure that the implemented Pact is catching what it is supposed to catch?
i.e. once our Pact work is complete, we're thinking of doing defect seeding and seeing whether errors like changed query parameters, changes to the API response structure, spelling changes in API paths, etc. are caught correctly by Pact or not.
Along those lines, what other negative tests and defect seeding could we run on the implemented Pact setup? For example, a checklist of smoke / exploratory tests to quickly exercise the Pact implementation before it goes live.
Thanks
Pact is not really designed for those types of tests. It can be done, but the extra variations in data often become an issue for providers [1].
There is a proposal to be able to "annotate" interactions by adding arbitrary labels for purposes such as this. If you'd like to add your thoughts to https://github.com/pact-foundation/pact-specification/issues/75 that would be helpful.
[1] https://docs.pact.io/consumer#use-pact-for-isolated-unit-tests
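If you haven't seen what an isolated consumer contract test looks like, the shape is roughly the following. This sketch uses pact-python's consumer DSL (your actual stack is Scala and mobile, so treat it purely as an illustration of the pattern); the provider name, endpoint, and payload are hypothetical:

```python
# Illustrative consumer contract test using pact-python. The consumer code
# talks to Pact's mock service instead of the real provider; the recorded
# interaction becomes the contract the provider is later verified against.
import atexit

import requests
from pact import Consumer, Provider

pact = Consumer("MobileApp").has_pact_with(Provider("OrderService"), port=1234)
pact.start_service()
atexit.register(pact.stop_service)


def test_get_order():
    expected = {"id": 42, "status": "shipped"}  # hypothetical payload

    (pact
     .given("order 42 exists")
     .upon_receiving("a request for order 42")
     .with_request("get", "/orders/42")
     .will_respond_with(200, body=expected))

    with pact:
        # In a real test this would be your API client, pointed at the mock service.
        resp = requests.get("http://localhost:1234/orders/42")
        assert resp.json() == expected
```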
I am using pact-jvm-provider-spring. I have two different pact (.json) files, let's say order.json and irs.json. I need to run them sequentially (order followed by irs), but the test classes are picked up in alphabetical order, so irs runs first and order runs second. Is there a way to execute a particular test class's provider states, or to define the test class execution order?
Pact is not a tool for end-to-end testing; in fact, one of Pact's stated objectives is to reduce or, in some cases, completely remove the need for E2E testing.
Instead, we use contract tests to avoid the need for end-to-end testing. Doing this has a lot of benefits, including the ability to test and release things separately, avoiding the need to manage test environments and data, and reducing coupling/ordering in the tests themselves. Furthermore, contract tests should be able to run on your laptop or on a CI build - you don't need to test against a running provider deployed to a real environment.
If you have to run a set of these tests in a particular sequence, you're doing it wrong
Here are some links to help you understand what I mean a bit better:
https://docs.pact.io/consumer/contract_tests_not_functional_tests
https://docs.pact.io/faq/#do-i-still-need-end-to-end-tests
https://docs.pact.io/getting_started/what_is_pact_good_for
I would also recommend completing one of our workshops, probably https://github.com/DiUS/pact-workshop-jvm.
It takes about 1 hour, but is well worth your time as all of the core concepts are clearly explained.
The Symfony testing documentation doesn't really mention a distinction between functional tests and integration tests, but my understanding is that they are different.
The Symfony docs describe functional testing like this:
Make a request;
Test the response;
Click on a link or submit a form;
Test the response;
Rinse and repeat.
While the Ruby on Rails docs describe it like this:
was the web request successful?
was the user redirected to the right page?
was the user successfully authenticated?
was the correct object stored in the response template?
was the appropriate message displayed to the user in the view?
The Symfony docs seem to be describing something more akin to integration testing. Clicking links, filling out forms, submitting them, etc. You're testing that all these different components are interacting properly. In their example test case, they basically test all actions of a controller by traversing the web pages.
I am confused why Symfony does not make a distinction between functional and integration testing. Does anyone in the Symfony community isolate tests to specific controller actions? Am I overthinking things?
Unit testing refers to testing the methods of a class, one by one, and checking that they make the right calls in the right context. If those methods use dependencies (injected services or even other methods of the class), we mock them to isolate the test to the current method only.
Integration testing refers to automatically testing a feature of your application. This means checking that all possible usage scenarios for the given feature work as expected. For such tests, you basically use the crawler to simulate a website user working through the feature, and check that the resulting page, or even the resulting database data, is consistent.
Functional testing refers to manually challenging the application's usability, in a preproduction environment. You have a Quality Assurance team that rolls out scenarios to check whether your website works as expected. Manual testing gives you feedback you can't get automatically, such as "this button is ugly", "this feature is too complex to use", or whatever other subjective feedback a human (who will generally think like a customer) can give.
The way I see it, the two lists don't contradict each other. The first list (Symfony) can be seen as a method to provide answers for the second list (Rails).
Both lists sound like functional testing to me. They use the application as a whole to determine whether the application satisfies the requirements. The second list (Rails) describes typical questions to determine if requirements are met; the first list (Symfony) offers a method for answering those questions.
Integration tests are more focused on how units work together. Say you have a repository unit that depends on a database abstraction layer. A unit test can make sure the repository itself functions correctly by stubbing/mocking out the database abstraction layer. An integration test will use both units to see if they actually work together as they should.
Another example of integration testing is using a real database to check whether the correct tables/columns exist and whether queries deliver the results you expect, but also that when a logger is called to store a message, that message really ends up in a file.
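To make the repository example concrete, here is a small sketch (in Python rather than PHP, purely for illustration; the class, table, and column names are hypothetical) showing the same repository exercised first as a unit test with the database layer stubbed out, and then as an integration test against a real in-memory SQLite database:

```python
# Unit test vs. integration test for a repository that depends on a database
# abstraction layer. Names are illustrative only.
import sqlite3
import unittest
from unittest import mock


class UserRepository:
    def __init__(self, db):
        self.db = db  # any object exposing query(sql, params)

    def find_name(self, user_id):
        rows = self.db.query("SELECT name FROM users WHERE id = ?", (user_id,))
        return rows[0][0] if rows else None


class SqliteDb:
    """Minimal database abstraction layer backed by sqlite3."""
    def __init__(self, conn):
        self.conn = conn

    def query(self, sql, params=()):
        return self.conn.execute(sql, params).fetchall()


class UserRepositoryUnitTest(unittest.TestCase):
    def test_returns_name_from_db_layer(self):
        db = mock.Mock()
        db.query.return_value = [("Ada",)]  # database layer stubbed out
        self.assertEqual(UserRepository(db).find_name(1), "Ada")
        db.query.assert_called_once()


class UserRepositoryIntegrationTest(unittest.TestCase):
    def test_works_against_a_real_database(self):
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        conn.execute("INSERT INTO users VALUES (1, 'Ada')")
        repo = UserRepository(SqliteDb(conn))
        # Both units working together, including the real SQL query.
        self.assertEqual(repo.find_name(1), "Ada")
```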
PS: Functional testing that actually uses a (headless) browser is often called acceptance testing.
Both referenced docs describe functional tests. Functional tests are performed from the user's perspective (typically at the GUI layer). They test what the user will see and what will happen if the user submits a form or clicks some button. It does not matter whether the process is automatic or manual.
Then there are integration and unit tests. These tests are at a lower level. The basic precondition for unit tests is that they are isolated: you can test a particular object and its methods, but without external or real dependencies. This is what mocks are for (basically, a mock simulates a real object according to the unit test's needs). Without an understanding of IoC, it is really hard to write isolated tests.
If you are writing tests using real/external dependencies (no mocks), you are writing integration tests. Integration tests can test the cooperation of two objects, or of a whole package/module, including querying a database, sending mails, etc.
Yes, you are overthinking things ;)
I don't know why Symfony or Ruby on Rails state things like that. At some point, the kind of testing depends on the eye that's looking at it. Bottom line: the name doesn't matter. The only important thing is the confidence that the test gives you in what you are doing.
Apart from that, tests are alive and should evolve with your code. I sometimes test only for a specific HTTP status code, other times I isolate a module and unit test it... it depends on the time I have to spend, the benefits, etc.
If I have a piece of code that is only used in a controller, I usually go for a functional test. If I'm writing a utility, I usually go for unit testing.
I have an orchestration which polls data from a database (which is actually used by an ERP, so I am not able to manipulate data in this database). Once the polling port finds matching data, it executes the orchestration and sends the data to a third-party web service.
The logic used in this orchestration is complicated and prone to change, so it's important to cover it with a proper set of tests. I have been thinking about this for a while and even thought of splitting it into 3 different components, so that:
The first part (can be just 2 ports) reads the data from the database and puts it into a folder
The second one (the current orchestration) uses a file port to read the data dumped by the first component, and dumps the resultant file to another folder
The third component reads the file dumped by the second component and sends it to the web service
However, I have a few concerns:
Is this a frowned-upon practice when it comes to BizTalk, or is it a normal way to do things?
Performance - would it be significantly slower compared to the current solution?
We are currently using one of the servers to run the tests / do the build using BTDF and Jenkins. Is there a way to disable components 1 and 3, run the tests, and re-enable them once the build is completed so that everything can function normally?
You can avoid the overhead of writing to and reading from files by using the built-in functionality of the MessageBox. The first place to start is here: https://msdn.microsoft.com/en-us/library/aa949234.aspx
There is an excellent BizTalk sample which shows how you can use this approach to modularise your functionality into a set of orchestrations which independently read from and write to the MessageBox. It's referenced at the bottom of the previous page and is called "Direct Binding to the MessageBox Database in Orchestrations".
I'd recommend against this approach. You'd be better off making the three orchestrations direct-bound to the MessageBox, each subscribing to the messages published by the previous orchestration. You could also create send ports that subscribe to these messages, or just use the management console to debug the messages.
You can also write unit tests for your various tasks. If you're doing some work in a .NET helper library, you can have a plain old unit test project. You might also want to look into the BizUnit framework (https://bizunit.codeplex.com/) - it takes a little getting used to, but it's a great resource for writing BizTalk unit tests.