I want to start using the Pact JVM framework for contract testing. But does Pact JVM support REST and JMS?
I haven't found any information about this.
Yes, Pact JVM supports REST[1], its primary use case, and also supports JMS testing in the form of Messages [2].
Note that at the moment the other language implementations don't support this message-style test, but work is in progress to make that happen.
[1] Assuming REST = JSON/HTTP
[2] https://github.com/DiUS/pact-jvm/tree/master/pact-jvm-consumer-junit#consumer-test-for-a-message-consumer
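For the message (JMS-style) case in [2], a consumer test looks roughly like the sketch below. This is only a sketch based on the pact-jvm-consumer-junit README linked above: the provider/consumer names, provider state and payload fields are made up, and the package locations have moved between pact-jvm versions, so treat the imports as assumptions.

    import java.util.HashMap;
    import java.util.Map;

    import org.junit.Assert;
    import org.junit.Rule;
    import org.junit.Test;

    // Package locations vary between pact-jvm versions; these follow the older DiUS layout.
    import au.com.dius.pact.consumer.MessagePactBuilder;
    import au.com.dius.pact.consumer.MessagePactProviderRule;
    import au.com.dius.pact.consumer.Pact;
    import au.com.dius.pact.consumer.PactVerification;
    import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
    import au.com.dius.pact.model.v3.messaging.MessagePact;

    public class ExampleMessageConsumerTest {

        // The rule generates the expected message and injects it into this test class.
        @Rule
        public MessagePactProviderRule mockProvider = new MessagePactProviderRule(this);

        private byte[] currentMessage;

        // Defines the message the consumer expects to receive (the contract).
        @Pact(provider = "message_provider", consumer = "message_consumer")
        public MessagePact createPact(MessagePactBuilder builder) {
            PactDslJsonBody body = new PactDslJsonBody();
            body.stringValue("orderId", "42");      // hypothetical payload fields
            body.stringValue("status", "SHIPPED");

            Map<String, String> metadata = new HashMap<>();
            metadata.put("contentType", "application/json");

            return builder.given("an order exists")
                    .expectsToReceive("an order shipped event")
                    .withMetadata(metadata)
                    .withContent(body)
                    .toPact();
        }

        @Test
        @PactVerification({"message_provider", "an order shipped event"})
        public void processesTheMessage() {
            // Hand the generated payload to your real JMS listener / message handler here.
            Assert.assertNotNull(new String(currentMessage));
        }

        // Called by MessagePactProviderRule to supply the message bytes.
        public void setMessage(byte[] messageContents) {
            currentMessage = messageContents;
        }
    }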
Pact is a RESTful consumer/provider testing tool only; it has to communicate over HTTP, and the data contract can only be in JSON at the moment (this could change in the future). We have talked about supporting other protocols like messaging (websockets), but there isn't much need for it yet and we have other priorities to work on currently.
This question is purely about semantic convention. I came onto a project where the architect named the API layer (a .NET Core API) solution "middleware."
I have always referred to these projects as the API, e.g. MyMagicCompanyAPI.
To me, Middleware is usually the part of the code that intercepts http requests and does something before the request info is passed down the pipeline, e.g. .NET Core Middleware or the Angular Interceptor.
On that note, is it wrong to call an API "middleware"? If not, is it preferable/more accurate to just call it an API rather than middleware?
Can a .NET Core API be Called “Middleware”?
Short answer: YES
Depending on the context in which it is being used.
.NET Core's and Angular's (et al.) use of the term within their architectures is not the only contextual use of the term "middleware".
As you said, semantics. Or better yet, it is a matter of context.
A whole API can be a middleware in a distributed system.
In distributed applications
The term is most commonly used for software that enables communication and management of data in distributed applications.
Other examples
The term middleware is used in other contexts as well. Middleware is sometimes used in a similar sense to a software driver, an abstraction layer that hides detail about hardware devices or other software from an application.
Reference: https://en.wikipedia.org/wiki/Middleware
On that note, is it wrong to call an API "middleware"? If not, is it preferable/more accurate to just call it an API rather than middleware?
That would be a matter of preference/opinion of the maintainer(s) of said system as a whole.
In our application, .NET XCC is being used to communicate with a MarkLogic modules database to execute modules, functions, ad hoc queries, etc.
I want to replace these XCC calls with REST calls so that we can run the application on MarkLogic 9, since .NET XCC has been deprecated in MarkLogic 9.
I have tried the built-in REST API in MarkLogic, but it only allows executing modules that already exist in the modules database.
Is there any online resource or anything else that could help us?
Any help would be appreciated.
Thanks,
ArvindKr
There is /v1/invoke to invoke modules in the modules database attached to the REST app-server you are addressing, and also /v1/eval, which allows running ad hoc queries.
HTH!
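As an illustration only (shown in Java here, though the same HTTP request translates directly to a .NET HttpClient), a call to /v1/eval can look like the sketch below. The host, port, credentials and query are placeholders, and MarkLogic app-servers default to digest authentication, so the basic-auth Authenticator only works if the app-server is configured for basic authentication (otherwise use a client that supports digest). If I recall correctly, /v1/invoke is the same shape of request with a module parameter naming the module URI to run instead of the xquery parameter; check the MarkLogic REST API docs for your version.

    import java.net.Authenticator;
    import java.net.PasswordAuthentication;
    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class MarkLogicEvalSketch {
        public static void main(String[] args) throws Exception {
            // Ad hoc query with an external variable, passed via the "vars" form parameter.
            String xquery = "xquery version \"1.0-ml\"; declare variable $who external; fn:concat(\"Hello, \", $who)";
            String vars = "{\"who\":\"world\"}";
            String form = "xquery=" + URLEncoder.encode(xquery, StandardCharsets.UTF_8)
                    + "&vars=" + URLEncoder.encode(vars, StandardCharsets.UTF_8);

            HttpClient client = HttpClient.newBuilder()
                    .authenticator(new Authenticator() {
                        @Override
                        protected PasswordAuthentication getPasswordAuthentication() {
                            // Placeholder credentials; only works if the app-server uses basic auth.
                            return new PasswordAuthentication("admin", "admin".toCharArray());
                        }
                    })
                    .build();

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8000/v1/eval")) // placeholder host/port
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            // The response body comes back as multipart/mixed, one part per item in the result sequence.
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }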
If you're going to replace XCC.NET with RESTful calls, try out XQRS; it allows you to build services in XQuery in a manner similar to JAX-RS for Java.
I would only consider the following for cases such as yours, where compatibility with legacy code is useful or required and where other options are exhausted. This is not an elegant approach, but it may be useful in special cases.
The XDBC protocol (which is what XCC uses) is supported natively on exactly the same app servers and ports on which the REST API is exposed. You can see this on port 8000 in a default install. The server literally cannot tell a 'REST application' and an 'XCC application' apart except by the URI requested in the request (and in some cases additional headers like cookies). REST and XDBC are both HTTP-based, and at the HTTP layer they are similar enough that they can share the same ports and configurations.
XDBC is 'passed through' the REST processing via the XML rewriter. XDBC uses /eval and /invoke while REST uses /v1/eval and /v1/invoke. If you look at the default rewriter.xml for port 8000 you can see how the routing is made. While the XDBC protocol is not formally published, it's not difficult to 'reverse engineer' by looking at the XCC code (public Java source) and the rewriter. For example, it's not difficult to construct the URL and payload data for a basic eval or invoke call. You should be able to replicate existing XCC.NET client behaviour exactly by using the /eval and /invoke endpoints (look for the xdbc attribute set in rewriter.xml; this causes the request handling to use pure XDBC protocol and behaviour).
Another alternative, if you cannot solve the external variables problem, is to write new 'REST-friendly' APIs that then xdmp:invoke() the legacy APIs, passing in the appropriate namespaces. An option is to put the legacy code in an entirely separate modules DB and then replicate the module URIs exactly with the new code. If you don't need to maintain co-existing versions, then you can modify the old code to remove the namespaces from the parameters or assign local variable aliases.
We have two micro-services, Provider and Consumer, which are built independently. The Consumer micro-service makes a mistake in how it consumes the Provider service (for whatever reason) and as a result an incorrect pact is published to the Pact Broker.
The Consumer service build succeeds (and can go all the way to release!), but the next Provider service build will fail for the wrong reason. So we end up with a broken Provider service build and a broken release of the Consumer.
What is the best practice to guard against situations like this?
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Thanks!
This is the nature of consumer-driven contracts - the consumer gets a significant say in the API!
As a general rule, if the contract doesn't change there is no need to run the Provider build, although there is currently no easy way to know this in the Broker (see feature request https://github.com/bethesque/pact_broker/issues/48).
As for solutions you could use one or more of the below strategies.
Effective use of code branches
It is of course very important that new assumptions on the contract be validated by the Provider before the Consumer can be safely released. Have branches tested against the Provider before you merge into master.
But most importantly - you must be collaborating closely with the Provider team!
Use source control to detect a modified contract:
If you also checked the master pact files into source control, your CI build could conditionally act: if the contract has changed, you must wait for a green provider build; if not, you can safely deploy!
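As a rough illustration (the file paths below are made up), the consumer's CI step could compare the newly generated pact with the copy committed to master and only allow an automatic deploy when they match:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Arrays;

    public class PactChangedCheck {
        public static void main(String[] args) throws Exception {
            Path committed = Paths.get("pacts/consumer-provider.json");        // copy checked into source control
            Path generated = Paths.get("target/pacts/consumer-provider.json"); // produced by the consumer tests

            // Naive byte comparison; in practice you would ignore the pact metadata section
            // (pact-jvm version etc.) and compare only the interactions.
            boolean changed = !Arrays.equals(Files.readAllBytes(committed), Files.readAllBytes(generated));

            if (changed) {
                System.err.println("Contract changed: wait for a green provider build before deploying");
                System.exit(1);
            }
            System.out.println("Contract unchanged: safe to deploy");
        }
    }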
Store in separate repository
If you really want the provider to maintain control, you could store contracts in an intermediate repository or file location managed by the provider. I'd recommend this is a last resort as it negates much of the collaboration pact intends to facilitate.
Use Pact Broker Webhooks:
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Yes, this is possible using webhooks on the Pact Broker. You could trigger a build on the Provider as soon as a new contract is published to the Broker.
You could envisage this step working with options 1 and 2.
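For reference, a webhook definition posted to the Broker looks roughly like this. The CI URL and job name are placeholders, and the exact event names and template variables (e.g. ${pactbroker.pactUrl}) depend on your Broker version, so check the Pact Broker webhook docs:

    {
      "events": [
        { "name": "contract_content_changed" }
      ],
      "request": {
        "method": "POST",
        "url": "https://ci.example.com/job/provider-verification/build",
        "headers": { "Content-Type": "application/json" },
        "body": { "pactUrl": "${pactbroker.pactUrl}" }
      }
    }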
See Using Pact where the Consumer team is different from the Provider team in our FAQ for more on this use case.
You're spot on; that is one of the things currently lacking in the Pact workflow, and it's something I've been meaning to work towards once a few other things align.
That being said, this doesn't solve your current problem in the meantime, so I'm going to suggest a potential workaround in your process. Instead of running the consumer tests, having them pass, and then releasing straight away, you could run the tests on the consumer, then wait for the provider tests to come back green before releasing the consumer and provider together. Another way would be to version your provider/consumer interactions (API versioning) so that you can release the consumer beforehand, but it isn't "turned on" until the correct version of the provider is released.
None of these solutions are great, and I wholeheartedly agree. This is something that I'm quite passionate about and will be working on soon, to fix the developer experience around the Pact Broker and releasing the consumer/provider in a better fashion.
Any and all comments are welcome. Cheers.
I think the problem might be caused by the fact that contracts are generated on the consumer side. It means that consumers can modify those contracts however they want, but in the end the producer's build will suffer due to incorrect contracts generated by consumers.
Is there any way for contracts to be defined by the producer? I think the producer is responsible for maintaining its own contracts. For instance, in the case of Spring Cloud Contract it is recommended to have contracts defined in the producer's sources (e.g. in the same git repo as the producer source code) or in a separate SCM repo that can be managed by the producer and consumers together.
I used to use soap webservices for transferring chart data to my flex app, but recently switched over to using BlazeDS because of performance, convenient typing, etc.
I'm considering switching over to using JSON (as I do in other parts of the app) for these reasons:
Proliferation of DTOs for communicating with Flex.* (With JSON, I just use JsonConfig to exclude properties as desired.)
Difficult to debug (whereas JSON is good ol' plaintext).
Problems with load balancing without sticky sessions.
Anyone else run into these problems with BlazeDS? Is BlazeDS worth the hassle?
* I could use the Externalizable interface instead of distinct DTOs, but it's also a pain.
I wouldn't give up on using remoting. Performance of remoting will be much better than JSON. Remember, ActionScript doesn't have a built-in method to decode JSON, so you'd need to use an AS library, which will be slower than anything built into the player. You'd be better off using XML than JSON.
You should be able to exclude specific properties as desired by marking them as transient. ActionScript has [Transient] metadata and the idea came from Java. The C# library we use for remoting has Transient support. I'm sure BlazeDS does too.
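On the Java side, a minimal sketch, assuming BlazeDS skips transient fields when serializing objects to AMF (the class and field names are made up):

    // Hypothetical DTO returned to the Flex client over AMF.
    public class ChartPointDTO {
        public double value;                  // serialized to the client
        public transient String debugNotes;   // expected to be excluded from AMF serialization
    }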
Debugging is easy with the right tools. You should get Charles. It provides very nice views of AMF request and response messages (assuming you're using HTTP and not RTMP, I don't know about RTMP debugging).
http://www.charlesproxy.com/
You also seem to be choosing between BlazeDS and anything-not-remoting. You have more options. BlazeDS is just one remoting implementation that Adobe made available. They also have a commercial one. There are also many open-source remoting projects available. We use a wonderful one for C# called Fluorine. Open-source Java options are Red5 and OpenAMF, but I think there are others as well.
http://red5.org/
http://openamf.com/
There's also a distinction between RTMP and HTTP remoting. You can get data into Flex through either of these protocols, and each has its advantages/disadvantages. I personally prefer HTTP remoting unless you absolutely need the functionality RTMP provides (push, streaming). HTTP will be easier to debug and should not have problems with a load balancer; it's just HTTP calls where the content happens to be binary.
Does the Astoria service model only support ATOM, JSON, XML, and XML+HTTP?
Are formats like SOAP, WSDL, and ASMX outdated? So when I wish to develop SOA, can I ignore SOAP, ASMX, and WSDL?
I would add to the other answer here and say there is in fact a way to discover the metadata about a Data Services (REST) endpoint. Every endpoint includes a service document (just do a GET on the root of the endpoint) that describes the sets exposed by the service. Further, going to the $metadata endpoint from the root of the service (i.e. http://mydomain/myservice.svc/$metadata) returns an XML metadata document that fully describes the service (the sets, types, properties on types, relationships between sets, and service operations).
No, most definitely not!
ASMX = ASP.NET web services. This is outdated; it was introduced in .NET 1.0 and basically replaced by WCF in .NET 3.0.
BUT: WCF is definitely NOT outdated! WCF is the Microsoft standard way of communicating between two systems. It uses SOAP (including WSDL and XSD) by default, and this is mature and reliable technology which works well in enterprise scenarios where you need things like data integrity, (human and machine readable) service description through WSDL and service metadata, and so forth. SOAP also offers more advanced features like reliable messaging and transactional support.
REST / ADO.NET Data Services is a more lightweight, easier-to-get-at approach to exposing services, but it's lacking in many ways: there's no unified service description available, so you cannot really "discover" what methods and what datatypes the service offers; either you have the knowledge yourself, or the service provider gives you documentation in plain English, but there's no standard way of describing a REST service to the outside world (yet). Also, you don't really know ahead of time what kind of data the service might return - there's no XML schema to stick to - it's more of a "let's hit the service and see what comes back" approach, which might work quite OK in some cases, but not really in larger-scale, enterprise-style environments.
So to sum up: the SOAP (WSDL, XSD) vs. REST debate is ongoing, both have their reasons to exist, and I don't see one of them replacing the other - they complement one another.