We are using Pact for contract testing. An API that is used in two different contexts responds with different values in each (though the fields and format of the responses are the same). Should we cover this as two different interactions? If yes, is it not like testing the business functionality?
This has implications in the following case too...
We use contract tests to validate the backward compatibility of our mobile app. Suppose there is a change in the API where only the values in the response have changed, and the change works for the current version of the consumer but not for an older version. This breaks the provider's backward compatibility with the older version of the mobile app. How will we be able to catch this if we only consider the structure of the response in our contract tests?
It's a good question and there is no straightforward answer for it in all cases.
The first thing I'd say on this, is that structure alone is not a contract test - it's more a schema test - and schemas are not contracts. Contract tests do care about values in many cases.
Start by working with the following operating principle:
The rule of thumb for deciding what to test or not test is: if I don't include this scenario, what bug in the consumer, or what misunderstanding about how the provider responds, might be missed? If the answer is none, don't include it.
In your case the value of role has a specific enum-like meaning. I assume that the consumer code looks for this value, and might conditionally do something different based on its value. For example, if it were a UI it might display different options alongside the page, or fetch other relevant data.
If that's the case, then you would potentially want to include it as a contract test. If the UI doesn't care about the value of the field other than to display it, then it wouldn't be worth including in a contract test.
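To make the distinction concrete, here is a rough sketch (plain Python, not the Pact DSL; all names are illustrative) of the difference between a schema-style check and two value-aware interactions over the same response shape:

```python
# Sketch: the difference between a schema-style check and a contract-style
# check on the same response body. Names are illustrative, not Pact's API.

def matches_shape(body):
    """Schema test: only the fields and their types matter."""
    return isinstance(body.get("id"), int) and isinstance(body.get("role"), str)

def matches_admin_interaction(body):
    """Contract test for the 'admin' context: the consumer branches on
    role == 'admin', so the value itself is part of the contract."""
    return matches_shape(body) and body["role"] == "admin"

def matches_viewer_interaction(body):
    """Second interaction, for the 'viewer' context."""
    return matches_shape(body) and body["role"] == "viewer"

admin_response = {"id": 1, "role": "admin"}
viewer_response = {"id": 2, "role": "viewer"}

# Both pass the schema test, but each satisfies only its own interaction:
assert matches_shape(admin_response) and matches_shape(viewer_response)
assert matches_admin_interaction(admin_response)
assert not matches_admin_interaction(viewer_response)
```

If the consumer never branched on `role`, the single `matches_shape` check would be enough and the second interaction would add no value.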
if yes, is it not like testing the business functionality?
Well, not exactly. Contract tests are not functional tests, but as with many types of testing there is overlap. Your aim here is to ensure that the various important conversations that are expected to occur between a consumer and a provider are covered. Functional testing is a separate activity, primarily distinguished by its concern for side effects.
Related
There seems to be plenty of information on explicit contract and state upgrades, but there appears to be a lack of information about implicit contract and state upgrades.
Assume that I use the signature policy for contracts. How do I migrate old states to new ones if I also want to keep using the old ones?
UPDATE:
I have found those samples, and as I understand it, there is no state upgrade process at all! On the contrary, all flows/states and contracts are created in a backward-compatible way. But intuitively, if I have 50 releases, for example, does that mean the related piece of code will contain 50 if/else branches for all possible old versions of the flow? Won't the code become a mess? Is there any way of somehow normalizing the states?
I think you are correct. As long as old versions of the data (i.e. Corda states) exist in the network, you will need to keep this conditional logic in your contract code, so that it's capable of handling states in the older format.
What you could do to mitigate this proliferation of conditional logic is:
1. Identify all the states in the older format. If there are any, migrate them to the new format by spending them in a transaction and re-creating them in the new format. If there aren't any, move to the next step.
2. Perform another implicit upgrade of your contract code that has no functional changes besides removing the conditional logic that is not needed anymore.
Following these steps, you can gradually remove conditional logic that's no longer needed, simplifying the contract code over time. But you're essentially back to a form of explicit upgrade, which might not be very practical depending on the number of parties and states in your network.
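The shape of that conditional logic, and the migration step that lets you delete it, can be sketched like this (plain Python for illustration, not the Corda API; field names and the version scheme are assumptions):

```python
# Sketch (illustrative, not Corda's API): contract verification that must
# branch on the state's format version while old-format states still exist.

def verify(state):
    version = state.get("version", 1)
    if version == 1:
        # Old format: a single 'name' field.
        assert state["name"], "name must be set"
    elif version == 2:
        # New format: name split into first/last.
        assert state["first_name"] and state["last_name"]
    else:
        raise ValueError("unknown state version")

def migrate(old_state):
    """Spend a v1 state and re-create it in the v2 format. Once no v1
    states remain on the ledger, the v1 branch in verify() can be deleted
    in a follow-up implicit upgrade with no other functional changes."""
    first, _, last = old_state["name"].partition(" ")
    return {"version": 2, "first_name": first, "last_name": last or first}

v1 = {"version": 1, "name": "Alice Smith"}
verify(v1)              # old branch still needed
v2 = migrate(v1)
verify(v2)              # new branch handles the migrated state
```

Each migration pass lets you retire one branch, which is exactly the gradual simplification described above.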
I see that Rule Flow supports actions, so it may be possible to build some types of workflow on top of this. In my situation I have a case management application with tasks for different roles, all working on a "document" that flows through different states; depending on the state, a different role will see it in their queue to work on.
I'm not sure what your question is, but InRule comes with direct support for Windows Workflow Foundation, so executing any InRule RuleApplication, including those with RuleFlow definitions, is certainly possible.
If you'd like assistance setting up this integration, I would suggest utilizing the support knowledge base and forums at http://support.inrule.com
Full disclosure: I am an InRule Technology employee.
For case management scenarios, you can use decisions specifically to model a process. Create a custom table or flags in your cases that represent the transition points in your process (steps). As you transition between steps, call a decision that determines whether the data state is good enough to make the transition. If it is, set the flag for the new state. Some folks allow for multiple states at the same time. InRule is a stateless platform; however, when used with CRM it provides 95% of the process logic and relies on CRM to do the persistence. I have written about this pattern in a white paper:
https://info.inrule.com/rs/inruletechnology/images/White_Paper_InRule_Salesforce_Integration.pdf
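The pattern above can be sketched in a few lines (plain Python for illustration, not InRule's API; the case fields and decision names are assumptions):

```python
# Sketch of the pattern described above: a flag on the case records its
# current step, and a stateless "decision" function guards each transition.
# Persistence of the flag would live in the host system (e.g. CRM).

def decision_ready_for_review(case):
    """Stateless check: is the data good enough to move to 'review'?"""
    return bool(case.get("document")) and case.get("assignee") is not None

def transition(case, new_step, decision):
    """Apply the transition only if the guarding decision passes."""
    if decision(case):
        case["step"] = new_step
        return True
    return False

case = {"step": "draft", "document": "report.docx", "assignee": "alice"}
assert transition(case, "review", decision_ready_for_review)
assert case["step"] == "review"

incomplete = {"step": "draft", "document": None, "assignee": "bob"}
assert not transition(incomplete, "review", decision_ready_for_review)
```

The decision itself stays stateless; only the host system's flag changes, which matches the division of labor described above.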
I know it's possible to use validators to check data input in the presentation layer of an application (e.g. regex, required fields etc.), and to show a message and/or required marker icon. Data validation generally belongs in the business layer. How do I avoid having to maintain two sets of validations on data I am collecting?
EDIT: I know that presentation validation is good, that it informs the user, and that it's not infallible. The fact remains, does it not, that I am effectively checking the same thing in two places?
Yes, and no.
It depends on the architecture of your application. We'll assume that you're building an n-tier application, since the vast majority of applications these days tend to follow that model.
Validation in the user interface is designed to provide immediate feedback to the end-user of the system to prevent functionality in the lower tiers from ever executing in the first place with invalid inputs. For example, you wouldn't even want to try to contact the Active Directory server without both a user name and a password to attempt authentication. Validation at this point saves you processing time involved in instantiating an object, setting it up, and making an unnecessary round trip to the server to learn something that you could easily tell through simple data inspection.
Validation in your class libraries is another story. Here, you're validating business rules. While it can be argued that validation in the user interface and validation in the class libraries are the same, I would tend to disagree. Business rule validation tends to be far more complex. Your rules in this case may be more nuanced, and may detect things that cannot be gleaned through the user interface. For example, you may enforce a rule that states that the user may execute a method only after all of a class's properties have been properly initialized, and only if the user is a member of a specific user group. Or, you may specify that an object may be modified only if it has not been modified within the last twenty-four hours. Or, you may simply specify that a string value cannot be null or empty.
In my mind, however, properly designed software uses a common mechanism to enforce DRY (if possible) from both the UI and the class library. In most cases, it is possible. (In many cases, the code is so trivial, it's not worth it.)
I don't think client-side (presentation layer) validation is actual, useful validation; rather, it simply notifies the user of any errors the server-side (business layer) validation will find. I think of it as a user interface component rather than an actual validation utility, and as such, I don't think having both violates DRY.
EDIT: Yes, you are doing the same action, but for entirely different reasons. If your only goal is strict adherence to DRY, then you do not want to do both. However, by doing both, while you may be performing the same action, the results of that action are used for different purposes (actually validating the information vs. notifying the user of a problem), and therefore performing the same action twice actually results in useful information each time.
I think having good validations at the application layer brings multiple benefits:
1. It facilitates unit testing.
2. You can add multiple clients without worrying about data consistency.
UI validation can be used as a tool to provide quick response times to end users.
Each validation layer serves a different purpose. User interface validation is used to discard bad input. Business logic validation performs validation based on business rules.
For UI validation you can use RequiredFieldValidators and the other validators available in the ASP.NET framework. For business validation you can create a validation engine that validates the object; this can be accomplished using custom attributes.
Here is an article which explains how to create a validation framework using custom attributes:
http://highoncoding.com/Articles/424_Creating_a_Domain_Object_Validation_Framework.aspx
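The linked article builds the engine with .NET custom attributes; a rough analog of the same idea (sketched in Python, with invented names) declares the rules as class-level metadata that a small generic engine walks over:

```python
# Rough analog of attribute-driven validation: rules are declared as
# metadata on the class, and one generic engine applies them to any object.

RULES = {}

def validated(cls):
    """Register a class's declared rules with the engine (stand-in for
    the attribute discovery a .NET engine would do via reflection)."""
    RULES[cls] = getattr(cls, "_rules", {})
    return cls

@validated
class Customer:
    _rules = {
        "name": lambda v: bool(v),        # required field
        "age": lambda v: 0 <= v <= 130,   # range check
    }
    def __init__(self, name, age):
        self.name, self.age = name, age

def validate(obj):
    """Generic engine: return the names of fields that fail their rule."""
    errors = []
    for field, rule in RULES[type(obj)].items():
        if not rule(getattr(obj, field)):
            errors.append(field)
    return errors

assert validate(Customer("Ann", 30)) == []
assert validate(Customer("", 200)) == ["name", "age"]
```

The point of the pattern is that `validate` never changes as new domain classes are added; only the declared rules do.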
Following up on a comment from Fredrik Mörk as an answer, because I don't think the other answers are quite right, and it's important for the question.
At least in a context where the presentation validation can be bypassed, the presentation validations and business validations are doing completely different things.
The business validations protect the application. The presentation validations protect the time of the user, and that's all. They're just another tool to assist the user in producing valid inputs, assuming that the user is acting in good faith. Presentation validations should not be used to protect the business validations from having to do extra work because they can't be relied upon, so you're really just wasting effort if you try to do that.
Because of this, your business validations and presentation validations can look extremely different. For business validations, depending on the complexity of your application / scope of what you're validating at any given time, it may well be reasonable to expect them to cover all cases, and guarantee that invalid input is impossible.
But presentation validations are a moving target, because user experience is a moving target. You can almost always improve user experience beyond what you already have, so it's a question of diminishing returns and how much effort you want to invest.
So in answer to your question, if you want good presentation validation, you may well end up duplicating certain aspects of business logic - and you may well end up doing more than that. But you are not doing the same thing twice. You've done two things - protected your application from bad-faith actors, and provided assistance to good-faith actors to use your system more easily. In contexts where the presentation layer cannot be relied upon, there is no way to reduce this down so that you only perform a task like "only a number please" once.
It's a matter of perspective. You can think of this as "I'm checking that the input is a number twice", or you can think "I've guaranteed that I'm never getting anything other than a number, and I've made sure the user realises as early as possible that they're only supposed to enter a number". That's two things, not one, and I'd strongly recommend that mental approach. It'll help keep the purpose of your two different kinds of validations in mind, which should make your validations better.
I am working on some code coverage for my applications. Now, I know that code coverage is an activity linked to the type of tests that you create and the language for which you wish to do the code coverage.
My question is: is there any possible way to do some generic code coverage? For instance, can we have a set of features/test cases which can be run (along with many more specific tests for the application under test) to get code coverage of, say, 10% or more of the code?
More to the point, if I wish to build a framework for code coverage, what is the best way to go about making a generic one? Is it possible to have some functionality automated or generalized?
I'm not sure that generic coverage tools are the holy grail, for a couple of reasons:
Coverage is not a goal, it's an instrument. It tells you which parts of the code are not entirely hit by a test. It does not say anything about how good the tests are.
Generated tests cannot guess the semantics of your code. Frameworks that generate tests for you can only deduce meaning from reading your code, which in essence could be wrong, because the whole point of unit testing is to see if the code actually behaves the way you intended it to.
Because an automated framework will generate artificial coverage, you can never tell whether a piece of code is tested with a proper unit test or only superficially exercised by a framework. I'd rather have untested code show up as uncovered, so I can fix it.
What you could do (and I've done ;-) ) is write a generic test for testing Java beans. Using reflection, you can test a Java bean against Sun's Java bean spec: assert that equals and hashCode are both implemented (or neither of them), check that the getter actually returns the value you pushed in with the setter, and check whether all properties have getters and setters.
You can do the same basic trick for anything that implements Comparable, for instance.
It's easy to do, easy to maintain, and forces you to have clean beans. As for the rest of the unit tests, I try to focus on getting the important parts tested first and thoroughly.
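The reflective bean test described above can be sketched generically (here in Python rather than Java, purely for illustration; the `Account` class and naming convention are assumptions):

```python
# Sketch of a generic "bean" test via reflection: for every writable
# property, push a value through the setter and assert the getter
# returns it unchanged.

class Account:
    def __init__(self):
        self._owner = None
        self._balance = 0
    # simple getter/setter pairs, as on a Java bean
    def get_owner(self): return self._owner
    def set_owner(self, v): self._owner = v
    def get_balance(self): return self._balance
    def set_balance(self, v): self._balance = v

def check_bean(obj, sample_values):
    """Generic round-trip test, driven by reflection over method names."""
    for name in dir(obj):
        if name.startswith("set_"):
            prop = name[len("set_"):]
            getter = getattr(obj, "get_" + prop, None)
            assert getter is not None, f"missing getter for {prop}"
            value = sample_values[prop]
            getattr(obj, name)(value)
            assert getter() == value, f"round-trip failed for {prop}"

check_bean(Account(), {"owner": "alice", "balance": 42})
```

One `check_bean` covers every bean-like class you feed it, which is what makes the trick cheap to maintain.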
Coverage can give a false sense of security. Common sense can not be automated.
This is usually achieved by combining static code analysis (Coverity, Klocwork, or their free analogs) with dynamic analysis, running tests against an instrumented application (profiler + memory checker). Unfortunately, it is hard to automate the test algorithms themselves; most tools are a kind of "recorder", able to record traffic/keys/signals (depending on the domain) and replay them (with minimal changes/substitutions like session ID/user/etc.).
Yes, I did read the 'Related Questions' in the box above after I typed this =). They still didn't help me as much as I'd like, as I understand what the difference between the two is - I'm just not sure if I need it in my specific case.
So I have a fully unit tested (simple & small) application. I have a 'Job' class with a single public Run() method (plus ctors) which takes an Excel spreadsheet as a parameter, extracts the data, checks the database to see if we already have that data, and if not, makes a request to a third-party vendor, takes that response, puts it in the database, and then completes the job (db update again).
I have IConnection to talk to the vendor, IParser to parse Excel/vendor files, and IDataAccess to do all database access. My Job class is lean & mean and doesn't do much logic itself; even though in reality it drives all of the work, it's really just 'chaining along' data through to the composite objects...
So all the composite objects are unit tested themselves, including the DAL, and even my Run() method on the Job class is fully unit tested, using mocks for all possible code paths.
So - do I need to do any type of integration test at this point, other than running the app to see if it works? Are my tests of the Run() method with mocks considered my integration tests? Or should my integration test use real instances instead of mocks, and then assert database values at the end, based on known Excel spreadsheet input? But that's what all my unit tests are doing already (just in separate places, and the mocked Run test makes sure those places 'connect')! Following the DRY methodology, I just don't see a need to do an integration test here...
Am I missing something obvious guys? Many thanks again...
I think the biggest thing you're missing is the actual behaviour of your external systems. While your unit tests may certainly assert that the individual steps perform the expected action, they do little to reveal the run-time issues that may arise when accessing external systems. Your external systems may also contain data you do not know about.
So yes, I think you need both. You do not necessarily need to be equally detailed in both kinds of test. Sometimes you can just let the integration test be a smoke test.
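A smoke test of this kind can be very small: wire up real (or at least realistic) collaborators instead of mocks, run one known-good input end to end, and assert only the observable end state. A sketch, with all class names invented for illustration:

```python
# Sketch: the Job described above, exercised as a smoke test. Unit tests
# would use mocks per collaborator; here realistic in-memory stand-ins
# are wired together end to end. All names are illustrative.

class InMemoryDb:
    def __init__(self): self.rows = {}
    def has(self, key): return key in self.rows
    def save(self, key, value): self.rows[key] = value

class StubVendor:
    def fetch(self, key): return f"vendor-data-for-{key}"

class Job:
    def __init__(self, db, vendor):
        self.db, self.vendor = db, vendor
    def run(self, keys):
        for key in keys:
            if not self.db.has(key):               # skip data we already have
                self.db.save(key, self.vendor.fetch(key))

# Smoke test: one known input, assert only the final database state.
db = InMemoryDb()
db.save("k1", "already-there")
Job(db, StubVendor()).run(["k1", "k2"])
assert db.rows["k1"] == "already-there"            # existing row untouched
assert db.rows["k2"] == "vendor-data-for-k2"       # missing row fetched and stored
```

The value here is not re-testing each collaborator's logic, but proving the pieces actually connect when composed together, which mocks cannot show.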