We interact with a lot of services in our API and use FreeMarker templates to mock the responses of the different services. Would using Pact (consumer-driven contract testing) be more beneficial for us in any way?
Yes, Pact can be used as a replacement for local stubs.
If you need a lot of flexibility in the responses, though, it's probably going to feel somewhat limiting.
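A minimal sketch of what a Pact consumer test can look like on the JVM (the service names, the /users/42 endpoint, and the provider state are invented; it assumes the au.com.dius.pact.consumer:junit5 dependency). The expectations written here play the role your FreeMarker stubs play today, and the generated pact file can additionally be verified against the real provider:

```java
import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "user-service")
class UserServiceConsumerPactTest {

    @Pact(consumer = "order-service")
    RequestResponsePact userExists(PactDslWithProvider builder) {
        // The expectations below replace a hand-maintained stub template.
        return builder
                .given("user 42 exists")
                .uponReceiving("a request for user 42")
                    .path("/users/42")
                    .method("GET")
                .willRespondWith()
                    .status(200)
                    .body(new PactDslJsonBody()
                            .integerType("id", 42)
                            .stringType("name", "Alice"))
                .toPact();
    }

    @Test
    void fetchesUser(MockServer mockServer) throws Exception {
        // In a real test you would point your own client class at mockServer.getUrl().
        HttpRequest request = HttpRequest.newBuilder(
                URI.create(mockServer.getUrl() + "/users/42")).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```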
Related
We have 5 microservices (more on the way) that communicate with each other asynchronously. 3 of these microservices do not have any API. Those consume data from a message queue, do some processing, and write data into another queue. 2 of these microservices do have APIs, and those also consume data from the queues but send the response back to the caller.
Given that, for testing the service interactions, correctness of contracts, and end-to-end flow:
what would be the best way to test the asynchronous services that read from and write to queues?
would consumer-driven contract tests be applicable anywhere?
I feel end-to-end production testing is possible, but can something more granular and effective be done?
First, let's correct one thing - you do have APIs. The messages that the "API-less" services read must have some defined content or format. That's an API. You should be testing it, both positive and negative.
Before you get to whole-system testing (in a test or staging environment that mimics your production environment) your testing should probably be in layers, much as in any other system.
Unit tests to test the behavior of each class. In your message-handling classes, for example, call each handler with messages that should trigger specific actions and verify that it behaves correctly. These tests usually run very quickly, so it's easy to run them often while developing code.
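For example (the handler, the in-memory publisher test double, and the exception type are all hypothetical stand-ins for your own classes):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class OrderMessageHandlerTest {

    @Test
    void enrichesValidOrderAndForwardsItToTheOutboundQueue() {
        InMemoryPublisher outbound = new InMemoryPublisher();       // test double for the next queue
        OrderMessageHandler handler = new OrderMessageHandler(outbound);

        handler.handle("{\"orderId\":\"42\",\"amount\":10}");

        assertEquals(1, outbound.publishedMessages().size());       // positive case
    }

    @Test
    void rejectsMalformedPayloadWithoutPublishingAnything() {
        InMemoryPublisher outbound = new InMemoryPublisher();
        OrderMessageHandler handler = new OrderMessageHandler(outbound);

        assertThrows(InvalidMessageException.class, () -> handler.handle("not-json"));   // negative case
        assertTrue(outbound.publishedMessages().isEmpty());
    }
}
```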
Integration tests to make sure your interaction with the next level of external systems works. You can use, for example, Testcontainers to run isolated instances of the queueing system that your services interact with.
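A sketch of what that can look like with Testcontainers and RabbitMQ (the processor class and queue wiring are hypothetical; the container API is real):

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.RabbitMQContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
class OrderProcessorQueueIT {

    // A disposable RabbitMQ broker that lives only for this test class.
    @Container
    static RabbitMQContainer rabbit =
            new RabbitMQContainer(DockerImageName.parse("rabbitmq:3-management"));

    @Test
    void readsFromInboundQueueAndWritesToOutboundQueue() {
        String amqpUrl = rabbit.getAmqpUrl();
        // Hypothetical wiring: point the service's queue client at amqpUrl, publish a test
        // message to the inbound queue, let the processor run, then assert on what appears
        // on the outbound queue.
    }
}
```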
Exactly what form these tests take depends on which languages and frameworks your system is built with. I briefly looked at the Pact tool you referenced, and it takes a similar approach.
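Pact also supports non-HTTP "message pacts," which map well onto queue-based services: the consuming side defines the message it expects to receive, and the producing service is later verified against that contract. Roughly like this (exact class and method names vary between Pact JVM versions, and OrderEventHandler is a hypothetical stand-in for your own handler):

```java
import au.com.dius.pact.consumer.MessagePactBuilder;
import au.com.dius.pact.consumer.dsl.PactDslJsonBody;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.consumer.junit5.ProviderType;
import au.com.dius.pact.core.model.annotations.Pact;
import au.com.dius.pact.core.model.messaging.Message;
import au.com.dius.pact.core.model.messaging.MessagePact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import java.util.List;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "order-processor", providerType = ProviderType.ASYNCH)
class OrderProcessedEventContractTest {

    @Pact(consumer = "billing-service")
    MessagePact orderProcessedEvent(MessagePactBuilder builder) {
        // The contract describes the message the consumer expects to find on the queue.
        return builder
                .expectsToReceive("an order-processed event")
                .withContent(new PactDslJsonBody()
                        .stringType("orderId", "42")
                        .numberType("amount", 10))
                .toPact();
    }

    @Test
    @PactTestFor(pactMethod = "orderProcessedEvent")
    void handlesOrderProcessedEvent(List<Message> messages) {
        // Feed the example payload into the same handler the real queue listener uses.
        new OrderEventHandler().handle(messages.get(0).contentsAsString());
    }
}
```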
Looks like Pact supports message-based contract testing. I think I have a path forward with this.
We are building a platform based on microservices. The platform will provide a number of basic functions that will be used in various independent projects.
We need to come up with an architecture that allows us to extend the basic functionality and interact with the existing services. The main requirement is that the code of the main services remains unchanged, while custom solutions built on the platform can be easily reused.
We are considering several options. For example, suppose there is a service "foo" which provides the functions foo1 and foo2. To extend its functions, we can create an independent "foobar" service and put it in front of the foo service: it accepts API requests, executes the custom functions, and then forwards the request to foo. The result is a kind of intermediary service that acts as the main link between the project-specific functionality and the main platform. The advantage of this approach is complete independence from the code base of the basic services. The main disadvantage is the complexity of implementation and the need to fragment the functions of the main service heavily.
The second option under consideration is similar to an approach often used in monolithic applications: a hook system that allows you to override the behavior of the system. For example, you can create an independent service that connects events and subscribers.
This approach is more flexible, but at the same time it is still quite difficult to implement. The main disadvantage of this approach is synchronous blocking network calls.
The third option we are considering is building the microservices themselves in such a way that additional modules can be added to them, so that services can be customized at build time. The main code remains unchanged, but inside the process the already-mentioned scheme of hooks and events is implemented (at the code level of individual services). The benefit is ease of implementation. The main drawback is that it is very difficult to implement customizations when multiple services are involved.
Perhaps we are reinventing the wheel and good solutions to this problem already exist. If you know of any, or have a good idea about possible ways to solve this problem, please share.
I don't think there is a single answer to this generic problem. The third solution seems pretty good. You might take advantage of the chain-of-responsibility or request/response pipeline pattern to implement the open/closed principle fairly easily. This should ensure that you can add or modify features in your foo service without touching the existing code.
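A minimal sketch of that pipeline idea (all names and steps are invented; the point is that each project registers extra steps around the platform's core step, so the core code is never modified):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Each step can validate, enrich, or transform the request; project-specific behaviour
// is added by registering extra steps rather than editing the foo service itself.
class Pipeline<T> {
    private final List<UnaryOperator<T>> steps = new ArrayList<>();

    Pipeline<T> register(UnaryOperator<T> step) {
        steps.add(step);
        return this;
    }

    T run(T request) {
        T current = request;
        for (UnaryOperator<T> step : steps) {
            current = step.apply(current);
        }
        return current;
    }

    public static void main(String[] args) {
        // Hypothetical wiring: core foo handling plus one custom step added by a project.
        Pipeline<String> pipeline = new Pipeline<String>()
                .register(req -> req.trim())             // platform-provided step
                .register(req -> "[audited] " + req)     // project-specific extension
                .register(req -> req.toUpperCase());     // stand-in for the core foo behaviour
        System.out.println(pipeline.run("  hello foo  "));
    }
}
```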
If you are looking for something that tackles the whole architecture rather than a single microservice, you might want to take a look at event-driven microservice design.
This design approach is similar to your option 2, with the difference that it uses a message broker and asynchronous communication between the microservices.
There are a lot of advantages, such as greater resiliency and looser coupling between the services.
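As a sketch of how that could look (Spring AMQP is assumed here purely for illustration; the exchange, routing key, and queue names are made up): the core foo service publishes a domain event and a project-specific foobar service subscribes to it, so foo's code never has to know about the extension.

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
class FooService {
    private final RabbitTemplate rabbit;

    FooService(RabbitTemplate rabbit) {
        this.rabbit = rabbit;
    }

    void foo1(String payload) {
        // ... core foo1 logic stays unchanged ...
        // Publish a domain event; foo neither knows nor cares who is listening.
        rabbit.convertAndSend("platform.events", "foo.completed", payload);
    }
}

@Component
class FoobarExtension {
    // Project-specific behaviour lives in its own service and simply subscribes to the event.
    @RabbitListener(queues = "foobar.foo-completed")
    void onFooCompleted(String payload) {
        // custom processing for this particular project
    }
}
```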
The intent is to create a set of web services that people can reuse. These services mostly interact with a backend DB, creating, retrieving, and processing data.
We want to expose services that people can use to create data mashups and other applications.
The end users are web pages that can be within our domain or outside it. For pages outside the domain we plan to release widgets that would be configured to retrieve and display the data.
One requirement: the application should be extremely scalable in terms of the number of users it can handle.
Our code base is .NET, and we are looking at ASPX web methods (or ASHX), ASMX web methods, and WCF (we are just starting to read up on WCF).
In terms of security/access, I found that maintaining session IDs and memberships is doable in all three. WCF seems a bit complicated to set up. I could not immediately see the value of ASMX when we can get everything done just using a web method in ASPX (with a little tweaking).
Also, I am assuming that with ASP.NET MVC 2 I might be able to get clean URLs for these web methods as well.
Questions
Which one will be the most effective in terms of performance and scalability?
Any reason why I should choose WCF or ASMX?
Thank you for taking the time to read through this post and apologies for the naive questions since I am new to .net.
EDIT: I now understand that WCF is the way to go. Just to understand the evolution of the technologies, it would be good if someone could shed light on how an ASPX web method differs from an ASMX one, when similar things (apart from discovery) can be accomplished by both. ASPX web methods can be made to return data in other formats (plain text, JSON). Also, it seems that we can build RESTful services using ASHX. Apologies again for the naive questions.
You should use WCF for developing web services in .NET. WCF is highly configurable, with many options for security, transport protocols, serialization, extensions, etc. Raw performance is also significantly higher. WCF is also being actively developed, with many new features added in versions 3.5 and 4. There are also variations like WCF Data Services and WCF RIA Services. WCF 4.0 also has better REST and JSON support, which you can use directly from ASP.NET / jQuery.
ASMX is considered a deprecated technology that has been replaced by WCF. So if you are starting new development that requires exposing reusable services, WCF is the way to go.
I am not necessarily disagreeing with the previous answer. But, from a different perspective, WCF is tricky to configure. It requires bindings, endpoints, packet sizes, and a lot of confusing parameters in your configuration files, and there are many serialization/deserialization issues reported. WCF is also a relatively new technology (and therefore still subject to bugs and needed patches).
The client-generated Reference.cs files might contain unwanted interfaces, and each public property of a client class exposed in the WSDL is generated with the same observer pattern that LINQ to SQL or Entity Framework uses (OnChanged, OnChanging, etc.), which adds a lot of fat to the client code compared with the traditional SOAP web client approach.
My recommendation: if you aren't using remoting over TCP and don't need the two-way notification mechanism for remote changes (both very cool features of WCF), you don't need to use it.
Does the Astoria service model only support
ATOM, JSON, XML, and XML+HTTP?
Are formats like SOAP, WSDL, and ASMX outdated? So when I want to develop an SOA, can I ignore SOAP, ASMX, and WSDL?
I would add to the above answer that there is in fact a way to discover the metadata about a Data Services (REST) endpoint. Every endpoint includes a service document (just do a GET on the root of the endpoint) that describes the sets exposed by the service. Further, going to the $metadata endpoint from the root of the service (i.e. http://mydomain/myservice.svc/$metadata) returns an XML metadata document that fully describes the service (the sets, types, properties on types, relationships between sets, and service operations).
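For example, both documents can be fetched with plain HTTP GETs; here is a small probe sketched with Java's built-in HttpClient, using the placeholder service address from above:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DataServiceDiscovery {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String root = "http://mydomain/myservice.svc"; // placeholder service root from above

        // GET on the root returns the service document (the sets the service exposes);
        // GET on /$metadata returns the full metadata description of types and relationships.
        for (String path : new String[] { "", "/$metadata" }) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(root + path)).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("GET " + root + path + " -> " + response.statusCode());
            System.out.println(response.body());
        }
    }
}
```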
No, most definitely not!
ASMX = ASP.NET web services. This is outdated; it was introduced in .NET 1.0 and essentially replaced by WCF in .NET 3.0.
BUT: WCF is definitely NOT outdated! WCF is the Microsoft standard way of communicating between two systems. It uses SOAP (including WSDL and XSD) by default, and this is mature and reliable technology that works well in enterprise scenarios where you need things like data integrity, a (human- and machine-readable) service description through WSDL and service metadata, and so forth. SOAP also offers more advanced features like reliable messaging and transactional support.
REST / ADO.NET Data Services is a more lightweight, easier-to-get-at approach to exposing services, but it's lacking in many ways: there's no unified service description available, so you cannot really "discover" what methods and what data types the service offers; either you have that knowledge yourself, or the service provider gives you documentation in plain English, but there's no standard way of describing a REST service to the outside world (yet). Also, you don't really know ahead of time what kind of data the service might return - there's no XML schema to stick to - so it's more of a "let's hit the service and see what comes back" approach, which might work quite well in some cases, but not really in larger-scale, enterprise-style environments.
So to sum up: the SOAP (WSDL, XSD) vs. REST debate is ongoing; both have their reasons to exist, and I don't see one of them replacing the other - they complement one another.
I used to create normal web services in my websites and call them from JavaScript to make AJAX calls.
Now I am learning about ADO.NET Data Services.
My question is:
Can ADO.NET Data Services replace my normal web services in the new sites I will create?
And if yes,
can I put these ADO.NET Data Services in a separate project (local on the same server) and just reference them from my website? That way I could use the same services for my website's internal use and also offer them to other websites or services, the same way Twitter does, for example.
It depends on what you want to do. I suggest you read my conversation with Pablo Castro, the architect of ADO.NET Data Services:
Data Services - Lacking
Here are, basically, Pablo's words:
I agree that some of these things are quite inconvenient and we're looking at fixing them (e.g. use of custom types in addition to types defined in the input model in order to produce custom result-sets). However, some others are just intrinsic to the nature of Data Services.
The Data Services framework is not a gateway to a database, and in general, if you need something like that, then Data Services will just get in the way. The goal of Data Services is to create a resource model out of an input data model and expose it through a RESTful interface that follows the uniform interface, such that every unit of data in the underlying model ("entities") becomes an addressable resource that can be manipulated with the standard verbs.
Often the actual implementation of a RESTful interface includes more sophisticated behaviors than just doing CRUD over the data under the covers, and these need to be defined in a way that doesn't break the uniform interface. That's why the Data Services server runtime has hooks for business logic and validation in the form of query/change interceptors and others. We also acknowledge that it's not always possible, or maybe not practical, to model absolutely everything as resources operated on with standard verbs, so we included service operations as an escape hatch.
Things like joins dilute the abstraction we're trying to create. I'm not saying they are bad or anything (relational databases without them wouldn't be all that useful); it's just that if a given application scenario requires the full query expressiveness of a relational database at the service boundary, then you can simply exchange queries over the wire (and manage the security implications of that). For joins that can be modeled as association traversals, Data Services already has support for them.
I guess this is a long way of saying that Data Services is not a solution for every problem that involves exposing data to the web. If you want a RESTful interface over a resource model that matches your underlying data model, then it usually works out well and will save you a lot of work. If you need a custom interface or direct access to a database, then Data Services is typically not the right tool, and other framework components such as WCF's SOAP and REST support do a great job at that.