I found Pact in some YouTube videos; it looks great and I'm quite interested in starting a POC for my team.
I've read previous questions and tried to follow the examples in Pact-JS, but I still have some confusion about very basic stuff, so excuse my noob questions.
1. Which repo should I refer to as the official repo?
I assumed the ones under the pact-foundation organization are official, but some links in the documentation go to different ones.
2. What do I need, and from which repos, to get all the parts of Pact working?
Consumer/provider.
For a start, I think I need Pact-JS.
github.com/pact-foundation/pact-js
Mock service.
Do I need pact-node, pact-mock-service-npm, or both for the mock service?
github.com/pact-foundation/pact-node
github.com/pact-foundation/pact-mock-service-npm
Broker
If I want to use a broker, then I'll need this:
github.com/pact-foundation/pact_broker
I think those three are the parts I need. Is that correct?
3. If multiple teams are involved, does one shared mock server help, or does it not really matter? I'm not clear on the benefit of a standalone mock server.
https://github.com/pact-foundation/pact-js is the official top-level JS repo.
a) That's correct.
b) You won't need to include it explicitly for much longer (I'm in the process of an API uplift which should simplify usage), but currently you will need to pull in pact-node to do provider verification.
c) If you want to share via a broker, head to https://github.com/bethesque/pact_broker for details (this is not strictly necessary, but recommended).
You won't need the standalone mock service if you use Pact JS. It is designed for cases where there is no language support for Pact (Pact JS wraps it under the hood for you).
I would check out the end-to-end example, which contains everything you'll probably need, including integration with a Broker.
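To make that concrete, here is a minimal consumer-side sketch using Pact JS (the consumer/provider names, port and endpoint are made up, and the package name/version may differ from what you install):

// consumer.spec.js - a minimal Pact JS consumer test (names and endpoints are illustrative)
const path = require('path');
const http = require('http');
const { Pact } = require('@pact-foundation/pact');

const provider = new Pact({
  consumer: 'MyConsumer',
  provider: 'MyProvider',
  port: 1234,
  dir: path.resolve(process.cwd(), 'pacts'), // where the pact file will be written
});

provider.setup()
  .then(() => provider.addInteraction({
    state: 'a user with id 1 exists',
    uponReceiving: 'a request for user 1',
    withRequest: { method: 'GET', path: '/users/1' },
    willRespondWith: {
      status: 200,
      headers: { 'Content-Type': 'application/json' },
      body: { id: 1, name: 'Mary' },
    },
  }))
  // stand-in for your real API client hitting the mock service on port 1234
  .then(() => new Promise((resolve, reject) => {
    http.get('http://localhost:1234/users/1', (res) => {
      res.resume();
      res.on('end', resolve);
    }).on('error', reject);
  }))
  .then(() => provider.verify())    // check the mock service saw the expected requests
  .then(() => provider.finalize()); // write the pact file and shut the mock service down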
Related
I want to share the pact file from the consumer to Bitbucket so that the provider can use it from the same location.
Has anybody implemented this?
Thanks in advance.
It's worth noting that this is very much a non-standard way of doing it, and I would highly recommend you don't do it this way (see https://docs.pact.io/pact_nirvana/step_4/). You're going to have to build a lot of the workflows again, and that will require investment in building tooling and coming up with ideas to evolve the contract. At some point, you'll be rebuilding key features the Pact Broker already has, and you would be better off just running that or using a hosted service like pactflow.io.
So, without a Pact Broker, you don't get the can-i-deploy tool, versioning, environment management, or any of these powerful workflows.
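For comparison, publishing to a broker from the consumer's CI build is only a few lines with pact-node (the broker URL, version and tag below are placeholders):

// publish-pacts.js - push locally generated pacts to a Pact Broker (URL/version are placeholders)
const pact = require('@pact-foundation/pact-node');

pact.publishPacts({
  pactFilesOrDirs: ['./pacts'],                  // where your consumer tests wrote the pact files
  pactBroker: 'https://your-broker.example.com', // hypothetical broker URL (or a pactflow.io account)
  consumerVersion: process.env.GIT_COMMIT || '1.0.0',
  tags: ['master'],
})
  .then(() => console.log('Pacts published'))
  .catch((err) => console.error('Publishing failed', err));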
With that said, if you do want to use Bitbucket:
You'll need to create a manual process (i.e. a script) to upload (from the consumer) and download (to the provider) from Bitbucket (see the sketch after this list).
You'll only want to upload from CI, so that it's easier to control.
For provider verification, you really want to be able to do this on a laptop as well as on CI, so you'll need a standard approach for pulling down the correct contract to verify in both places. Every team member will need credentials to read from that repo.
Pact doesn't have a mechanism to pull from git protocols, but you could potentially add that feature to the languages you need it for.
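A rough sketch of what that consumer-side upload script might look like (the repo URL, directory and file names are entirely hypothetical):

// publish-pact-to-bitbucket.js - run from consumer CI after the pact tests pass
const { execSync } = require('child_process');
const fs = require('fs');

const CONTRACTS_REPO = 'git@bitbucket.org:your-org/contracts.git'; // hypothetical shared contracts repo
const CLONE_DIR = 'contracts-repo';

// clone the shared repo, drop the freshly generated pact in, commit and push
execSync(`git clone ${CONTRACTS_REPO} ${CLONE_DIR}`, { stdio: 'inherit' });
fs.copyFileSync('pacts/myconsumer-myprovider.json', `${CLONE_DIR}/pacts/myconsumer-myprovider.json`);
execSync('git add pacts && git commit -m "Publish pact from consumer CI" && git push', {
  cwd: CLONE_DIR,
  stdio: 'inherit',
});

// The provider side then clones/pulls the same repo and points its verification step
// at the pact file it just downloaded.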
I prefer to keep my handlers free from ASP.NET infrastructure that is very hard to test (yes, even in ASP.NET Core). But sometimes it happens that you have a dependency like UserManager (I'd like to know one day why it's not an interface), HttpContext, etc., and unit tests turn into mocking hell.
I tried using integration testing to deal with it, creating a TestServer and having all the ASP.NET infrastructure initialized for every API call. It works quite well, but it sometimes seems like overkill if I just want to test the simple logic of my handler. And while it solves the technical problem of mocking ASP.NET infrastructure, it keeps the architectural problem (if you consider it one) of having ASP.NET infrastructure in your handlers.
I'd like to know: what are the recommended approaches to deal with this?
I feel your pain. I stumbled across a fantastic blog post from Jimmy Bogard that handles this problem by using what Martin Fowler calls Subcutaneous Tests. I will leave the deep explanation to those experts, but in a nutshell, subcutaneous tests simply avoid all the difficult-to-test aspects of the UI.
Shameless plug: I am currently in the process of writing up a wiki that demonstrates these patterns in a sample end-to-end project on GitHub. It's not difficult to follow but is probably too much code to post for an SO answer.
To Summarize:
If you are using MediatR correctly, your controllers should be very thin, which makes testing them pointless.
What you want to test are your handlers.
However, you want to test your handlers as part of your real-world pipeline.
To Solve:
Wrap the HTTP request in a transaction.
Build a test fixture that mimics the application's Startup.cs.
Set up a test DB server to execute queries and commands against, which is also reset after each test.
That's basically it. Every time you run an integration test against one of your handlers:
The hosting environment is mocked, but your application is started up in a real-world test.
Your query or command is wrapped in a transaction mimicking your DbContext.
The handler is executed against a real database, which is then reset.
I would add more code examples to my answer but between the blog post and the wiki I provided, it is much easier to follow the code examples there.
Edit 8/2021:
Stick with the source. Jimmy Bogard keeps the Contoso University project current on his GitHub page. Another great and a little more advanced example is the modular monolith project by Kamil Grzybek. That is also updated regularly on his GitHub page.
MediatR or not, you should always try to have only very basic pass-this-along logic in your controllers and call injected business logic classes from there to do the actual work. Because you inject them via interfaces to this business logic, your controllers' dependencies are easily mocked in your unit tests, and your tests can focus on whether the controllers implement those interfaces properly and do only the basic work of routing input/output. And your actual business logic can be tested even more easily.
For those classes that are static, for instance for reading web.config settings, one strategy that I like a lot is to make an interfaced wrapper class around them. While ConfigurationManager is static, I can still write a regular class with an interface that has methods or properties to read a specific setting (preferably semantically named) from the ConfigurationManager. Now I can easily mock any configured setting (or the absence of it) in my tests by just mocking the interface and setting up different return values.
I'd say it depends on the level of confidence you want to get in the end. If you want to make sure the whole system works as expected, then integration tests using a TestServer are probably the way to go.
One advantage of MediatR, though, is that it allows you to decouple your business logic from the application using it, which is why, at the very top level (let's say in controllers), there's no logic but just a delegation to the mediator.
That being said, you're right that sometimes your logic needs information from the hosting application. An example would be the user making the request, which is accessible in the HTTP context.
In that case, if you want to avoid having to set up a test HTTP server to test your logic works, you could represent that information in an abstraction and your handler would then take a dependency on that abstraction. Your tests could then mock that dependency while using the real system for everything else.
Does that make sense?
I've been looking into scaling Meteor and had an idea using the Meteor Cluster package:
Create a super-service*, which the user connects to, containing general core packages to be used by every micro-service (the api, app, salesSite, etc. would make use of its packages),
The super-service then routes to the appropriate micro-service (e.g., the app), providing it with the functionality of its own packages.
(* - as in super- and sub-, not that it's awesome... I mean it is but...)
The idea being that I can cascade each service as a superset of the super-service. This would also allow me to cleverly inherit functionality for other services in a cascading service style. E.g.,
unauthedApp > guestApp > userApp > modApp > adminApp,
for the application, where the functionality of the previous service is inherited by the next service (i.e., the further right along that chain, the more functionality is added and inherited).
Is this possible?
EDIT: If possible, is there a provided example of how to implement such a pattern using micro-services?
[[[[[ BIG EDIT #2: ]]]]]
I think I'm trying to make a solution fit the problem, so let me re-explain so this question can be answered based on the issue rather than the solution I'm trying to implement.
Basically, I want to "inherit" (for lack of a better word) the packages a service depends on based on the functionality it needs, so that no code is unnecessarily sent over the wire.
So starting with the core packages, which contain libraries I want all of my services to have, I then want to "add" further functionality as needed. Then I want to add page packages if serving a page-based service (as opposed to, say, the API service, which doesn't render pages), then the appropriate role-based page packages, and so on, until the most specific packages are added.
My thought was that I could chain the services in such a way that I could traverse from the most generic to the most specific service, finally ending with a composition of packages from multiple services. So for the guestApp, for example, that might be the core packages + generic page packages + generic app packages + unauthApp packages + guestApp packages, so no unnecessary packages are added.
Also, with this imaginary pattern I'm describing, I don't need to add all my core packages to each microservice - I can deal with them all within the core package right at the top of the package traversal I've discussed above, and not have to worry about forgetting to add the packages to the "inherited" packages.
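To make that concrete, here's roughly what I imagine in Meteor package.js terms (the package names are made up; each block is a separate package.js file):

// packages/core/package.js - libraries every service needs
Package.describe({ name: 'myapp:core', version: '0.0.1' });
Package.onUse(function (api) {
  api.use('ecmascript');
  api.imply('accounts-base'); // anything implied here is available to whoever uses myapp:core
});

// packages/unauthed-app/package.js - generic app functionality on top of core
Package.describe({ name: 'myapp:unauthed-app', version: '0.0.1' });
Package.onUse(function (api) {
  api.use('myapp:core');
  api.imply('myapp:core'); // pass core through to the next level of the chain
  api.addFiles('unauthed.js', 'client');
});

// packages/guest-app/package.js - the guestApp service just pulls in the previous link
Package.describe({ name: 'myapp:guest-app', version: '0.0.1' });
Package.onUse(function (api) {
  api.use('myapp:unauthed-app');
  api.imply('myapp:unauthed-app');
  api.addFiles('guest.js', 'client');
});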
Hope my reasoning here makes sense, and I hope you guys know of a best practice for doing this. Thank you!
Short answer:
Yes! That's a good use of a microservice architecture.
Long answer:
Microservices don't necessarily provide you with an inheritance mechanism as in OOP. You should consider microservices as independent "functions" which take in an input and respond with an output/action. Any microservice can depend on another to complete its own task.
And then, you "compose" necessary microservices in order to achieve the final output/action.
You can have one or few web facing "frontend" services that use a mix of few other backend microservices whose ports are not open to the public network.
The drawback of a microservice is its "minimum footprint". The idea behind microservices centers on a few main benefits:
Separate core services so that they can be "maintained" independently
Separate core services so that they can be "replaced" independently
Separate core services so that they can be "scaled" independently
But then each microservice, being a Node/Meteor app, will have its minimum CPU/RAM footprint even when it is just idle and waiting for a connection.
Furthermore, managing a single monolithic app, or just a few "largish" services is much easier, from a devops standpoint, than managing tens of individual deployments.
So, as with all engineering decisions, the right answer implies some kind of "balance".
Edit: regarding inheritance
As per the OP's comment, the microservices can indeed be referenced from parent code as either functions or classes and be composed (functions) or inherited from (classes), because after all the underlying functionality is just DDP endpoints.
If you are using the cluster package from meteorhacks:
// create a connection to your microservice
var someService = Cluster.discoverConnection("someService");
// call a normal meteor method from that service
var resultFromSomeService = someService.call("someMethodFromSomeService");
So, as with any piece of JavaScript code, you can wrap the above piece of code in a function, or in a class with its constructor and all, and inherit from it, exposing its interfaces as you desire.
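For example (a rough sketch; the service and method names are made up):

// a thin client class around one microservice's connection...
class SomeServiceClient {
  constructor() {
    this.connection = Cluster.discoverConnection('someService');
  }
  someMethod(args) {
    return this.connection.call('someMethodFromSomeService', args);
  }
}

// ...which a more specific client can extend, adding its own calls on top
class ExtendedServiceClient extends SomeServiceClient {
  anotherMethod(args) {
    return this.connection.call('anotherMethodFromSomeService', args);
  }
}

var client = new ExtendedServiceClient();
var result = client.someMethod({ foo: 'bar' });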
Can anyone explain Java callouts? A little help will do. Actually, I have several doubts regarding where to add the expressions and message flow JARs, and where to add my custom JAR.
Can I access the resources/java folder directly, and can I use it to store my data?
First, check the Apigee docs at
Customize an API using Java
http://apigee.com/docs/api-services/content/customize-api-using-java
Keep in mind Java callouts are only supported in the paid Apigee Edge product, not the free Developer platform.
As you decide how to use Java, you should consider this basic hierarchy of policy management:
Policy Configuration First: Apigee policy configurations are in broad use and therefore tested daily by clients, and they are the most performant option.
JavaScript Callout: For things you can't do in a standard policy, there is JavaScript (see the sketch at the end of this answer) -- keep in mind this is "compiled JavaScript", which means that when you deploy your project, the JS gets interpreted by the Java Rhino engine and then runs like native code. Very fast, very scalable, and very easy to manage, as your code is all in plain text files.
Java: You have to have a pretty compelling reason to use Java. The most common cases are where you have some complex connection that needs to be negotiated with custom encryption schemas, or where you are manipulating binary content. While performant, it's the most difficult code to manage (you upload compiled JARs, so if someone takes over your work, the source code is in a separate place from your deployment bundle), and it's the most difficult to debug in the event of a failure.
To your specific question: All Apigee variables are available in Java, and Java gives you pretty much god-like powers on the local server where the code is executed. Keep in mind that Apigee's physical architecture is distributed -- your JAR may run on different servers for different API calls, so any persistent data (that you might want to store locally) should really be put into a Key Value Map and read as needed. Keep your API development as stateless as possible.
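As a small illustration of the JavaScript option above, a policy script is just a plain file that reads and writes flow variables (the custom.* variable names below are made up):

// sample Apigee JavaScript policy - runs in the Rhino engine on the gateway
var clientId = context.getVariable('request.queryparam.client_id'); // a built-in flow variable
if (!clientId) {
  // flag the request so a later RaiseFault policy can reject it
  context.setVariable('custom.missing_client_id', 'true');
} else {
  context.setVariable('custom.client_id', clientId.trim());
}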
Hope that helps.
I work in a software company that delivers a software product. Many times we must integrate with other applications; 70% of the time we integrate with a single application. Currently we do not use middleware (MuleESB, BizTalk, ...) in these situations: the data conversion, transport conversion, etc. are handled inside the applications.
Wouldn't it be better to ALWAYS use a middleware solution, no matter whether you're integrating with one system or more? This way, all the customizations (data formatting, restructuring, transport conversions) of both parties can be handled by the middleware instead of by the applications.
Logically this seems to me like the right approach. But I ask myself: is middleware justifiable in the case of just two applications?
In practical terms, you will always be using a "Middleware" solution, either custom or packaged.
I'd look at it this way: rather than requiring a "middleware" package, I'd focus on making the app itself integration-friendly by using a consistent and as-standard-as-practical API for exchanging data.
Then the decision on what "middleware" to use is driven more by the circumstances on site. If the customer has only one app to integrate, then a simple custom solution might be perfectly serviceable. If they have 5 apps and defined processes for each, then a package makes more sense.
I don't always use middleware, but when I do, I use BizTalk. ;)
I found this to be a useful read when considering appropriate architecture. If you get a chance to see Richard Seroter's "Decision Framework" presentation, you should.
The Wikipedia page on the publish-subscribe pattern (http://en.wikipedia.org/wiki/Publish/subscribe) shows an interesting comparison of how publish-subscribe relates to client-server.