Contract testing on dictionary of objects - pact

I'm trying to write contract tests for an object that contains a dictionary of objects. I want to verify the entries respect my contract, but the keys change between the consumer and the provider. Right now, the matching rules in my contract look for specific keys in the message body, such as "$.properties.desired.deploymentsRemovals['4JgEA5GCeqwVsu6Qada9XS'].appId".
Is it possible to write contract tests in my situation?
I'm using the PactNet NuGet package, version 4.0.0-beta.3.
Using a matcher on the key, such as
deployments = new Dictionary<object, object> {
    { Match.Type("6XKISmGMWynbwM52mxov6S"), new { ...
produces a contract that searches for "pactNet.Matchers.TypeMatcher" as the key:
"deployments": {
    "pactNet.Matchers.TypeMatcher": {

I'm Yousaf, a developer advocate here at Pact (https://pact.io/) and Pactflow (https://pactflow.io/).
We have an open forum about contract testing in our Pact Foundation Slack; you can join over at https://slack.pact.io.
You may find the pact-net channel of particular interest.
.NET isn't my forte, and I haven't spent much time on StackOverflow in the past, but I hope to now!
You should be able to use matchers in your pact-net library; they were designed, from the V2 Pact specification onwards, to solve exactly that problem.
Which particular version and library are you using? There are various implementations, both official and community supported.
There should be examples of their use in your respective library's README; let me know if there aren't, and we can look to resolve that.
We plan to document these matcher implementations across the various languages very soon.
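For reference, the shape of the matching rule you ultimately want in the generated pact file uses a wildcard path segment in place of the concrete dictionary key. A rough sketch of the fragment, mirroring the path from your example (whether this PactNet beta can emit a map-key wildcard like this is an assumption on my part, so treat it as the target shape rather than a guaranteed feature):
"matchingRules": {
    "$.properties.desired.deploymentsRemovals.*.appId": {
        "match": "type"
    }
}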

Mock real gRPC server responses

We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in Kubernetes) in our pipeline. It also relies on an external gRPC server which we have no control over.
The setup we would like is this: a component provides the Microservice Boundary with 'external' data, then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated with those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses to the two gRPC services we call.
Since our integration tests are self-contained right now, and we don't want to write an entirely new actual gRPC server implementation just to mimic calls, is there a way to stand up a real gRPC server and configure it to return responses? The requirement is pretty much like a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose those as endpoints; proto files must be able to have different package names
configure the responses to each call using files we can store in source control
run in a Linux Docker container (no Windows)
I did find GripMock, which seemed almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 GripMock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today, there is a commercial tool that supports gRPC called Traffic Parrot, which allows you to create gRPC mocks based on your proto files. You can give it multiple proto files, store the mocks in Git, and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on proto files; you have to create them manually. Also, as of today the project has not been keeping up to date with proto and gRPC developments, e.g. the package-name issue you discovered yourself above (it works only if the package names in different proto files are the same). There are a few other open-source tools like grpc-wiremock, grpc-mock, or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the proto file corresponds to. If you want to also partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see the sample data as well.
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
// Inside a test helper class: the proto to mock, its package, and the service name.
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
// Stub implementations, keyed by RPC method name.
const implementations = {
    ex1: (call: any, callback: any) => {
        const response: any =
            new this.proto.ExampleResponse.constructor({msg: "the response message"});
        callback(null, response);
    },
};
// Register the mocked service and start the in-process gRPC server.
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();

Corda contract and state upgrade questions

I am going through this documentation and I have several uncertainties.
Performing explicit contract and state upgrades:
1. Preserve the existing state and contract definitions
2. Write the new state and contract definitions
3. Create the new CorDapp JAR
4. Distribute the new CorDapp JAR
5. Stop the nodes
6. Re-run the network bootstrapper (only if you want to whitelist the new contract)
7. Restart the nodes
8. Authorise the upgrade
9. Perform the upgrade
10. Migrate the new upgraded state to the Signature Constraint from the zone constraint
Questions:
1. Preserve the existing state and contract definitions
2. Write the new state and contract definitions
3. Create the new CorDapp JAR
How do I do that? Is it meant only to preserve the JARs with contracts and states on the nodes, rather than preserving them in source code? If I do not preserve them in source code, then how can I create the upgrade method?
interface UpgradedContract<in OldState : ContractState, out NewState : ContractState> : Contract {
    val legacyContract: ContractClassName
    fun upgrade(state: OldState): NewState
}
If I do not preserve the old state in source code, should I name the JAR differently each time I need to do an upgrade?
Can old JARs be removed from the node once the upgrade is completed?
6. Re-run the network bootstrapper (only if you want to whitelist the new contract)
8. Authorise the upgrade
Am I right that only these two steps are related to explicit contract upgrades? And if I use the implicit flow with signature constraints, do I skip only these two steps, while the others are still applicable and must be performed?
9. Perform the upgrade
Should this be done for each state separately by the owner of the state? In that case, should I run it on each node for the specific contracts where the node is a participant of the state? (In the docs it is mentioned to be run on a single node - but what if a single node is not a participant of some state?)
Other questions:
This section describes the explicit contract and state upgrade process: https://docs.corda.net/upgrading-cordapps.html#performing-explicit-contract-and-state-upgrades, while the signature constraints section (https://docs.corda.net/api-contract-constraints.html#signature-constraints) does not describe an upgrade process for states.
Is the process the same as for explicit upgrades, differing only in steps 6 and 8, or is it completely different?
Do I need to create the function transforming old states to new states in that case? If not, how will the old states be handled by new flows?
I see you have some great questions about contract upgrades. Here is an article written by one of our developer relations engineers: https://medium.com/corda/contract-upgrades-and-constraints-in-corda-425055a9a47f
Feel free to follow up with any additional questions that you have.
If you are new to Corda, feel free to join the Corda community channels at http://slack.corda.net/
While performing legacy contract upgrades, you need both the old and new contract JARs installed on your node (present in the cordapps folder).
You can create a new Gradle module, say v2-contract, and write the new contract in it. This is where you will write your UpgradedContract. You will need to refer to the old v1-contract JAR as well, since the new contract needs to know what the old state was. To do this, add a Gradle dependency in v2-contract like below.
dependencies {
    // Depend on the old contract module so the upgraded contract can see the old state.
    cordapp project(v1_contract)
}
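For illustration, a minimal sketch of what that v2 contract could look like (the state and class names here are hypothetical; only the UpgradedContract interface itself comes from Corda):
import net.corda.core.contracts.ContractClassName
import net.corda.core.contracts.UpgradedContract
import net.corda.core.transactions.LedgerTransaction

// Hypothetical v2 contract: upgrades OldState (from the v1-contract JAR) to NewState.
class MyContractV2 : UpgradedContract<OldState, NewState> {
    // Fully qualified name of the contract being replaced, as packaged in the v1 JAR.
    override val legacyContract: ContractClassName = "com.example.v1.MyContractV1"

    // Maps each unconsumed old state to its new representation.
    override fun upgrade(state: OldState): NewState =
        NewState(owner = state.owner, amount = state.amount)

    override fun verify(tx: LedgerTransaction) {
        // v2 verification logic goes here.
    }
}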
The old JAR can be removed from the cordapps folder once all the states have been upgraded to the new v2-contract.
a. For HashConstraints there is no need to run the bootstrapper again. You write the new contract by implementing UpgradedContractWithLegacyConstraint, run the jar task to build the new JAR, add it to the cordapps folder, run the Authorise Flow from all nodes, and run the Initiate Flow from one node's terminal. This is the explicit way of upgrading.
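As a rough sketch of those last two steps driven over RPC (assuming a CordaRPCOps proxy and the hypothetical MyContractV2 above; ContractUpgradeFlow.Authorise and ContractUpgradeFlow.Initiate are Corda's built-in flows):
import net.corda.core.flows.ContractUpgradeFlow
import net.corda.core.messaging.vaultQueryBy

// Each participant authorises the upgrade of the states it holds...
val stateAndRef = proxy.vaultQueryBy<OldState>().states.single()
proxy.startFlowDynamic(ContractUpgradeFlow.Authorise::class.java, stateAndRef, MyContractV2::class.java)
// ...then one node initiates the actual upgrade transaction.
proxy.startFlowDynamic(ContractUpgradeFlow.Initiate::class.java, stateAndRef, MyContractV2::class.java)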
b. However, if you are using the whitelist zone constraint, you want to make sure to add the new v2-contract JAR's hash to the whitelist parameter in the network parameters. You will need to run the network bootstrapper to whitelist this new v2-contract JAR hash: https://docs.corda.net/network-bootstrapper.html#whitelisting-contracts.
Once you do that, you can either go for an explicit upgrade by implementing UpgradedContract, or you can use an implicit upgrade.
c. If you are using Signature Constraints, there is no need to run the network bootstrapper for the new JAR: write the new v2-contract, build it using the Gradle jar task, and replace the old JAR with the new one. This is the implicit way of upgrading.
Note that the Authorise Flow should be run by all the participants.
Other questions:
There is no explicit upgrade step with Signature Constraints. You need to make sure you write your state in a backward-compatible way, build the new JAR, and replace the old JAR with it. The states will then refer to the new contract.
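A minimal sketch of what "backward compatible" typically means here (the state and field names are hypothetical; the rule of thumb under Corda's AMQP serialisation evolution is that new fields are added as nullable with defaults, so states serialised by the old JAR still deserialise):
import java.util.Currency
import net.corda.core.contracts.Amount
import net.corda.core.contracts.BelongsToContract
import net.corda.core.contracts.ContractState
import net.corda.core.identity.AbstractParty

// Hypothetical v2 of a state, evolved compatibly for an implicit upgrade.
@BelongsToContract(MyContractV2::class)
data class MyState(
    val owner: AbstractParty,
    val amount: Amount<Currency>,
    val memo: String? = null  // added in v2: nullable + default keeps old states loadable
) : ContractState {
    override val participants: List<AbstractParty> get() = listOf(owner)
}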
Hope that helps. Feel free to post more questions on the above answer or ping us on Slack.

How is contract code distributed

Is the contract-handling code essentially just Java, run by the server?
If I want to edit a contract's functionality, do I have to release code to be installed across the network?
Great question. The Corda technical whitepaper talks about this. See e.g. section 5.9: https://docs.corda.net/_static/corda-technical-whitepaper.pdf
The short answer is that some of this infrastructure still needs to be built out, but the key idea is that a State doesn't just say "the Java class with this name governs my evolution"; it says "the Java class with this name, living in a JAR with this hash, governs my evolution". So there will be no room for games caused by people trying to substitute malicious/compromised implementations.
As for how the code gets distributed: today, it is installed in each node locally. Very soon, it will be able to migrate around the network using the Attachments functionality.
And I should add: the contract verification logic will run in a very strict sandbox: both to limit what it can do and to ensure it is 100% deterministic... we can't have one node thinking a transaction is valid and another one thinking it is invalid!
As Richard notes, states reference contracts. Indeed, there is a contract property in the base ContractState interface:
@CordaSerializable
interface ContractState {
    val contract: Contract
    val participants: List<AbstractParty>
}
A Corda transaction is required to change any state property. Therefore, if one party wishes to novate/update the contract code, they must propose a transaction which changes the contract reference, then ask all required participants to assent to this change.

Project using [Dapper] gets reported by Veracode as CWE ID 89 (Improper Neutralization of Special Elements used in an SQL Command)

We have a .Net 4.0 project that is being scanned by Veracode in order to acquire security certification.
During the static scan the following vulnerability was found:
Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection') (CWE ID 89) See details at https://cwe.mitre.org/data/definitions/89.html
The report details the file and line numbers, which seem to refer to Dapper:
OurOwnDll.dll dev/.../dapper net40/sqlmapper.cs 1138
App_Browsers.dll dev/.../sqlmapperasync.cs 126
OurOwnDll is using Dapper.
I'm not sure where App_Browsers.dll comes from, but it seems to be related to the site project and to ASP.NET's browser-capabilities detection.
I would like to know if there is any way to prevent this vulnerability.
I am not familiar with Veracode; however, as pointed out by @Kristen Waite Jukowski, your issue may be due to the fact that some of your queries are not parameterised, in which case they are correctly being identified as vulnerable to SQL injection.
Alternatively, a similar question (relating to the same issue but with OrmLite) may shed some light on this. Like OrmLite, Dapper provides the facility to write raw SQL queries that can be composed from inputs that are not parameterised (for example by string concatenation), so using it may be deemed a vulnerability even if every query in your particular project is currently fully parameterised. The answer to that question (which may not be viable in your case) was to replace the existing ORM with Entity Framework:
During a code-readout with VeraCode the suggested proper remediation
was to replace ServiceStack ORM with EntityFramework 6.1.
From the comments in that question:
The difference is in EF, the executing context implements IDbCommand
but the CreateDataAdapter and other api's that can allow dynamic sql
have been implemented to throw exceptions. There are no code paths in
EF that allow dynamic sql without first going through a filtering
mechanism similar to OWASP.
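To make the parameterisation point concrete, here is a minimal sketch (the Users table, User class, name variable, and open connection are hypothetical; Query<T> is Dapper's standard extension method):
using Dapper;

// Vulnerable: user input is concatenated into the SQL text, so a
// scanner (rightly) flags it as SQL injection (CWE 89).
var risky = connection.Query<User>(
    "SELECT Id, Name FROM Users WHERE Name = '" + name + "'");

// Safe: the value travels as a bound parameter and is never parsed as SQL.
var safe = connection.Query<User>(
    "SELECT Id, Name FROM Users WHERE Name = @Name",
    new { Name = name });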

Which is the best way to test database packages?

I am currently working on a project where we need to test the database packages and functions.
We need to provide the input parameters to the database package and test that the package returns the expected value; we also want to test the response time of the request.
Please advise whether there is any tool available to perform this, or whether we should write our test cases in JUnit or some other framework.
Which would be the best approach?
I've used a more native approach when I had to do DWH testing: I arranged the test framework around the dev data-integration framework that was already in place, so I had a lot of reusable jobs, configurations, and code. But using OOP as you suggest ("write our test cases in JUnit") is a way to go too; see the sketch after the links below. Keep in mind, though, that DWH designs are often very complex (with a lot of aspects to consider), and interacting with the persistence layer is not always the best candidate for a testing strategy. A more DB-oriented solution (like tSQLt) offers a significant performance advantage.
Those resources helped me a lot:
dwh_testing
data-warehouse-testing
what-is-a-data-warehouse-and-how-do-i-test-it
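Picking up the JUnit route: a minimal sketch of a test that checks both a package function's return value and its response time (the GET_CUSTOMER_SCORE function, its parameters, and the dataSource configuration are hypothetical; CallableStatement is standard JDBC):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Types;
import javax.sql.DataSource;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class CustomerScorePackageTest {
    private DataSource dataSource; // assumed to be configured for the test database

    @Test
    void returnsExpectedScoreWithinTimeBudget() throws Exception {
        long start = System.nanoTime();
        try (Connection con = dataSource.getConnection();
             CallableStatement cs = con.prepareCall("{? = call GET_CUSTOMER_SCORE(?)}")) {
            cs.registerOutParameter(1, Types.INTEGER); // function return value
            cs.setLong(2, 42L);                        // input parameter
            cs.execute();
            assertEquals(100, cs.getInt(1));           // expected output
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        assertTrue(elapsedMs < 200, "package call exceeded the 200 ms budget");
    }
}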
My framework, Acolyte, provides a JDBC driver & tools designed for such purposes (mock-up, testing, ...): http://tour.acolyte.eu.org
It's already used in some open source projects (Anorm, YouTube's Vitess, ...), either in vanilla Java or using its Scala DSL.
// Build a statement handler: detect queries, then map queries/updates to results.
handler = handleStatement.withQueryDetection(...).
    withQueryHandler(/* which result for which query */).
    withUpdateHandler(/* which result for which update */);

// Register the prepared handler under the expected ID 'my-unique-id'.
acolyte.Driver.register("my-unique-id", handler);

// Then connect with an Acolyte JDBC URL that references that handler ID.
Connection con = DriverManager.getConnection(jdbcUrl);
// Connection |con| is managed through |handler|.
