What is the standard pattern of orchestrating microservices?
If a microservice only knows about its own domain, but there is a flow of data that requires that multiple services interact in some manner, what's the way to go about it?
Let's say we have something like this:
Invoicing
Shipment
And for the sake of the argument, let's say that once an order has been shipped, the invoice should be created.
Somewhere, someone presses a button in a GUI, "I'm done, let's do this!"
In a classic monolith service architecture, I'd say that there is either an ESB handling this, or the Shipment service has knowledge of the invoice service and just calls that.
But what is the way people deal with this in this brave new world of microservices?
I do get that this could be considered highly opinion-based, but there is a concrete side to it, as microservices are not supposed to do the above.
So there has to be a "what should it by definition do instead", which is not opinion-based.
Shoot.
The book Building Microservices describes in detail the styles mentioned by #RogerAlsing in his answer.
On page 43 under Orchestration vs Choreography the book says:
As we start to model more and more complex logic, we have to deal with
the problem of managing business processes that stretch across the
boundary of individual services. And with microservices, we’ll hit
this limit sooner than usual. [...] When it comes to actually
implementing this flow, there are two styles of architecture we could
follow. With orchestration, we rely on a central brain to guide and
drive the process, much like the conductor in an orchestra. With
choreography, we inform each part of the system of its job and let it
work out the details, like dancers all finding their way and
reacting to others around them in a ballet.
The book then proceeds to explain the two styles. The orchestration style corresponds more to the SOA idea of orchestration/task services, whereas the choreography style corresponds to the dumb pipes and smart endpoints mentioned in Martin Fowler's article.
Orchestration Style
Under this style, the book above mentions:
Let’s think about what an orchestration solution would look like for
this flow. Here, probably the simplest thing to do would be to have
our customer service act as the central brain. On creation, it talks
to the loyalty points bank, email service, and postal service [...],
through a series of request/response calls. The
customer service itself can then track where a customer is in this
process. It can check to see if the customer’s account has been set
up, or the email sent, or the post delivered. We get to take the
flowchart [...] and model it directly into code. We could even use
tooling that implements this for us, perhaps using an appropriate
rules engine. Commercial tools exist for this very purpose in the form
of business process modeling software. Assuming we use synchronous
request/response, we could even know if each stage has worked [...]
The downside to this orchestration approach is that the customer
service can become too much of a central governing authority. It can
become the hub in the middle of a web and a central point where logic
starts to live. I have seen this approach result in a small number of
smart “god” services telling anemic CRUD-based services what to do.
Note: I suppose that when the author mentions tooling he's referring to something like a BPM engine (e.g. Activiti, Apache ODE, Camunda). As a matter of fact, the Workflow Patterns Website has an awesome set of patterns to do this kind of orchestration, and it also offers evaluation details of different vendor tools that help to implement it this way. I don't think the author implies one is required to use one of these tools to implement this style of integration, though; other lightweight orchestration frameworks could be used, e.g. Spring Integration, Apache Camel or Mule ESB.
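To make the request/response flavor concrete, here is a minimal sketch of the orchestration style in Java. All service interfaces and method names are invented for illustration, not taken from the book:

    // The customer service is the central brain: it drives the whole flow
    // through synchronous request/response calls. Interfaces are hypothetical.
    interface LoyaltyPointsBank { void createAccount(String customerId); }
    interface EmailService     { void sendWelcomeEmail(String customerId); }
    interface PostalService    { void sendWelcomePack(String customerId); }

    class CustomerService {
        private final LoyaltyPointsBank loyalty;
        private final EmailService email;
        private final PostalService postal;

        CustomerService(LoyaltyPointsBank loyalty, EmailService email,
                        PostalService postal) {
            this.loyalty = loyalty;
            this.email = email;
            this.postal = postal;
        }

        void createCustomer(String customerId) {
            // The whole flowchart lives here in one place, which is easy to
            // follow, but this is also how "god" services start to grow.
            loyalty.createAccount(customerId);
            email.sendWelcomeEmail(customerId);
            postal.sendWelcomePack(customerId);
        }
    }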
However, other books I've read on the topic of Microservices and in general the majority of articles I've found in the web seem to disfavor this approach of orchestration and instead suggest using the next one.
Choreography Style
Under choreography style the author says:
With a choreographed approach, we could instead just have the customer
service emit an event in an asynchronous manner, saying Customer
created. The email service, postal service, and loyalty points bank
then just subscribe to these events and react accordingly [...]
This approach is significantly more decoupled. If some
other service needed to react to the creation of a customer, it just
needs to subscribe to the events and do its job when needed. The
downside is that the explicit view of the business process we see in
[the workflow] is now only implicitly reflected in our system [...]
This means additional work is needed to ensure that you can monitor
and track that the right things have happened. For example, would you
know if the loyalty points bank had a bug and for some reason didn’t
set up the correct account? One approach I like for dealing with this
is to build a monitoring system that explicitly matches the view of
the business process in [the workflow], but then tracks what each of
the services do as independent entities, letting you see odd
exceptions mapped onto the more explicit process flow. The [flowchart]
[...] isn’t the driving force, but just one lens through
which we can see how the system is behaving. In general, I have found
that systems that tend more toward the choreographed approach are more
loosely coupled, and are more flexible and amenable to change. You do
need to do extra work to monitor and track the processes across system
boundaries, however. I have found most heavily orchestrated
implementations to be extremely brittle, with a higher cost of change.
With that in mind, I strongly prefer aiming for a choreographed
system, where each service is smart enough to understand its role in
the whole dance.
Note: To this day I'm still not sure if choreography is just another name for event-driven architecture (EDA), but if EDA is just one way to do it, what are the other ways? (Also see What do you mean by "Event-Driven"? and The Meanings of Event-Driven Architecture). Also, it seems that things like CQRS and Event Sourcing resonate a lot with this architectural style, right?
Now, after this comes the fun. The Microservices book does not assume microservices are going to be implemented with REST. As a matter of fact in the next section in the book, they proceed to consider RPC and SOA-based solutions and finally REST. An important point here is that Microservices does not imply REST.
So, What About HATEOAS? (Hypermedia as the Engine of Application State)
Now, if we want to follow the RESTful approach, we cannot ignore HATEOAS, or Roy Fielding will be very pleased to point out on his blog that our solution is not truly REST. See his blog post REST APIs must be hypertext-driven:
I am getting frustrated by the number of people calling any HTTP-based
interface a REST API. What needs to be done to make the REST
architectural style clear on the notion that hypertext is a
constraint? In other words, if the engine of application state (and
hence the API) is not being driven by hypertext, then it cannot be
RESTful and cannot be a REST API. Period. Is there some broken manual
somewhere that needs to be fixed?
So, as you can see, Fielding thinks that without HATEOAS you are not truly building RESTful applications. For Fielding, HATEOAS is the way to go when it comes to orchestrating services. I am just learning all this, but to me, HATEOAS does not clearly define who or what is the driving force behind actually following the links. In a UI that could be the user, but in computer-to-computer interactions, I suppose that needs to be done by a higher level service.
According to HATEOAS, the only link the API consumer truly needs to know is the one that initiates the communication with the server (e.g. POST /order). From this point on, REST is going to conduct the flow, because, in the response of this endpoint, the resource returned will contain the links to the next possible states. The API consumer then decides what link to follow and move the application to the next state.
Despite how cool that sounds, the client still needs to know if the link must be POSTed, PUTed, GETed, PATCHed, etc. And the client still needs to decide what payload to pass. The client still needs to be aware of what to do if that fails (retry, compensate, cancel, etc.).
I am fairly new to all this, but for me, from the HATEOAS perspective, this client, or API consumer, is a higher-order service. If we think of it from the perspective of a human, you can imagine an end user on a web page deciding what links to follow; but still, the programmer of the web page had to decide what method to use to invoke the links and what payload to pass. So, to my point, in a computer-to-computer interaction, the computer takes the role of the end user. Once more, this is what we call an orchestration service.
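To illustrate the point, here is a sketch of such a machine client in Java, using the JDK's built-in HTTP client. The URLs, the "payment" rel, and the payloads are all made up for the example; a real client would parse the links out of the response body:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class OrderClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // The only URL the consumer knows up front: the entry point.
            HttpResponse<String> created = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/order"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"book\"}"))
                    .build(),
                HttpResponse.BodyHandlers.ofString());

            // Follow a link advertised in the response. Note the client still
            // has to know that "payment" is POSTed and what payload it takes.
            String paymentHref = extractLink(created.body(), "payment");
            client.send(
                HttpRequest.newBuilder(URI.create(paymentHref))
                    .POST(HttpRequest.BodyPublishers.ofString("{\"method\":\"card\"}"))
                    .build(),
                HttpResponse.BodyHandlers.ofString());
        }

        // Stub standing in for real hypermedia parsing of {"_links": ...}.
        static String extractLink(String body, String rel) {
            return "https://example.com/order/1/payment";
        }
    }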
I suppose we can use HATEOAS with either orchestration or choreography.
The API Gateway Pattern
Another interesting pattern is suggested by Chris Richardson who also proposed what he called an API Gateway Pattern.
In a monolithic architecture, clients of the application, such as web
browsers and native applications, make HTTP requests via a load
balancer to one of N identical instances of the application. But in a
microservice architecture, the monolith has been replaced by a
collection of services. Consequently, a key question we need to answer
is what do the clients interact with?
An application client, such as a native mobile application, could make
RESTful HTTP requests to the individual services [...] On the surface
this might seem attractive. However, there is likely to be a
significant mismatch in granularity between the APIs of the individual
services and data required by the clients. For example, displaying one
web page could potentially require calls to large numbers of services.
Amazon.com, for example,
describes how some
pages require calls to 100+ services. Making that many requests, even
over a high-speed internet connection, let alone a lower-bandwidth,
higher-latency mobile network, would be very inefficient and result in
a poor user experience.
A much better approach is for clients to make a small number of
requests per-page, perhaps as few as one, over the Internet to a
front-end server known as an API gateway.
The API gateway sits between the application’s clients and the
microservices. It provides APIs that are tailored to the client. The
API gateway provides a coarse-grained API to mobile clients and a
finer-grained API to desktop clients that use a high-performance
network. In this example, the desktop clients make multiple requests
to retrieve information about a product, whereas a mobile client
makes a single request.
The API gateway handles incoming requests by making requests to some
number of microservices over the high-performance LAN. Netflix, for
example,
describes
how each request fans out to on average six backend services. In this
example, fine-grained requests from a desktop client are simply
proxied to the corresponding service, whereas each coarse-grained
request from a mobile client is handled by aggregating the results of
calling multiple services.
Not only does the API gateway optimize communication between clients
and the application, but it also encapsulates the details of the
microservices. This enables the microservices to evolve without
impacting the clients. For example, two microservices might be
merged. Another microservice might be partitioned into two or more
services. Only the API gateway needs to be updated to reflect these
changes. The clients are unaffected.
Now that we have looked at how the API gateway mediates between the
application and its clients, let’s now look at how to implement
communication between microservices.
This sounds pretty similar to the orchestration style mentioned above, just with a slightly different intent; in this case it seems to be all about performance and simplification of interactions.
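As a rough sketch of the aggregation half of the pattern, here is a gateway endpoint in Java that fans one coarse-grained request out to several backend services in parallel. The backend hostnames and the response shape are invented for the example; a production gateway would also deal with auth, timeouts, and partial failures:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    public class ProductPageGateway {
        private static final HttpClient http = HttpClient.newHttpClient();

        static CompletableFuture<String> fetch(String url) {
            return http.sendAsync(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString())
                .thenApply(HttpResponse::body);
        }

        // One mobile request fans out to three LAN calls, aggregated into
        // a single client-tailored response.
        public static String productPage(String productId) {
            CompletableFuture<String> details = fetch("http://catalog/products/" + productId);
            CompletableFuture<String> reviews = fetch("http://reviews/products/" + productId);
            CompletableFuture<String> price   = fetch("http://pricing/products/" + productId);
            return "{\"details\":" + details.join()
                 + ",\"reviews\":" + reviews.join()
                 + ",\"price\":"   + price.join() + "}";
        }
    }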
Trying to aggregate the different approaches here.
Domain Events
The dominant approach for this seems to be using domain events, where each service publishes events describing what has happened, and other services can subscribe to those events.
This seems to go hand in hand with the concept of smart endpoints, dumb pipes that is described by Martin Fowler here: http://martinfowler.com/articles/microservices.html#SmartEndpointsAndDumbPipes
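For the question's shipment/invoicing example, a minimal in-process sketch of the idea might look like the following Java; the event bus here stands in for a real message broker (Kafka, RabbitMQ, etc.), and all names are invented:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // In-memory stand-in for a message broker.
    class EventBus {
        private final List<Consumer<Object>> subscribers = new ArrayList<>();
        void subscribe(Consumer<Object> handler) { subscribers.add(handler); }
        void publish(Object event) { subscribers.forEach(s -> s.accept(event)); }
    }

    record OrderShipped(String orderId) {}

    class ShipmentService {
        private final EventBus bus;
        ShipmentService(EventBus bus) { this.bus = bus; }

        void ship(String orderId) {
            // ... do the shipping work ...
            // Announce what happened; no knowledge of who reacts to it.
            bus.publish(new OrderShipped(orderId));
        }
    }

    class InvoicingService {
        InvoicingService(EventBus bus) {
            // Invoicing decides on its own to react to shipped orders.
            bus.subscribe(event -> {
                if (event instanceof OrderShipped shipped) {
                    createInvoice(shipped.orderId());
                }
            });
        }
        void createInvoice(String orderId) { /* create the invoice */ }
    }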
Proxy
Another approach that seems common is to wrap the business flow in its own service, where the proxy orchestrates the interaction between the microservices, as sketched below.
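A minimal sketch of that in Java, using the question's invoicing/shipment flow; the interfaces are hypothetical placeholders for calls to the real services:

    // The business flow lives in its own service; Shipment and Invoicing
    // remain ignorant of each other.
    interface Shipment  { void ship(String orderId); }
    interface Invoicing { void createInvoice(String orderId); }

    class OrderProcessingService {
        private final Shipment shipment;
        private final Invoicing invoicing;

        OrderProcessingService(Shipment shipment, Invoicing invoicing) {
            this.shipment = shipment;
            this.invoicing = invoicing;
        }

        // "I'm done, let's do this!" lands here, and the proxy drives the flow.
        void completeOrder(String orderId) {
            shipment.ship(orderId);
            invoicing.createInvoice(orderId);
        }
    }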
Other composition patterns
This page contains various composition patterns.
So, how is orchestration of microservices different from orchestration of old SOA services that are not “micro”? Not much at all.
Microservices usually communicate using http (REST) or messaging/events. Orchestration is often associated with orchestration platforms that allow you to create a scripted interaction among services to automate workflows. In the old SOA days, these platforms used WS-BPEL. Today's tools don't use BPEL. Examples of modern orchestration products: Netflix Conductor, Camunda, Zeebe, Azure Logic Apps, Baker.
Keep in mind that orchestration is a compound pattern that offers several capabilities to create complex compositions of services. Microservices are more often seen as services that should not participate in complex compositions, but rather stay more autonomous.
I can see a microservice being invoked in an orchestrated workflow to do some simple processing, but I don’t see a microservice being the orchestrator service, which often uses mechanisms such as compensating transactions and state repository (dehydration).
So you have two services:
Invoice microservice
Shipment microservice
In real life, you would have something where you hold the order state. Let's call it the order service. Next, you have order-processing use cases, which know what to do when the order transitions from one state to another. All these services contain a certain set of data, and now you need something else that does all the coordination. This might be:
A simple GUI knowing all your services and implementing the use cases ("I'm done" calls the shipment service)
A business process engine, which waits for an "I'm done" event. This engine implements the use cases and the flow.
An orchestration micro service, let's say the order processing service itself that knows the flow/use cases of your domain
Anything else I did not think about yet
The main point with this is that the control is external. This is because all your application components are individual building blocks, loosely coupled. If your use cases change, you have to alter one component in one place: the orchestration component. If you add a different order flow, you can easily add another orchestrator that does not interfere with the first one. The microservice way of thinking is not only about scalability and doing fancy REST APIs, but also about a clear structure, reduced dependencies between components, and reuse of common data and functionality that are shared throughout your business.
HTH, Mark
If state needs to be managed, then Event Sourcing with CQRS is the ideal way of communication. Otherwise, an asynchronous messaging system (AMQP) can be used for inter-microservice communication.
From your question, it is clear that Event Sourcing with CQRS should be the right mix. If using Java, take a look at the Axon framework. Or build a custom solution using Kafka or RabbitMQ.
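As a rough illustration of the Event Sourcing half of that suggestion, here is a toy Java aggregate; the event names are invented, and a framework like Axon would supply the real event store, command handling, and CQRS read models:

    import java.util.ArrayList;
    import java.util.List;

    record Event(String type, String orderId) {}

    class OrderAggregate {
        private final List<Event> eventStore = new ArrayList<>(); // append-only log
        private boolean shipped;

        void ship(String orderId) {
            record(new Event("OrderShipped", orderId));
        }

        void invoice(String orderId) {
            if (!shipped) throw new IllegalStateException("not shipped yet");
            record(new Event("InvoiceCreated", orderId));
        }

        // Commands append an event; state changes only via apply().
        private void record(Event e) {
            eventStore.add(e); // with CQRS, this is also published to read models
            apply(e);
        }

        private void apply(Event e) {
            if (e.type().equals("OrderShipped")) {
                shipped = true;
            }
        }

        // Current state can always be rebuilt by replaying the log.
        void replay(List<Event> history) {
            shipped = false;
            history.forEach(this::apply);
        }
    }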
You can implement orchestration by using the Spring State Machine model.
Steps
Add the below dependency to your project (if you are using Maven):
<dependency>
    <groupId>org.springframework.statemachine</groupId>
    <artifactId>spring-statemachine-core</artifactId>
    <version>2.2.0.RELEASE</version>
</dependency>
Define states and events, e.g. STATE1 and STATE2, and EVENT1 and EVENT2.
Provide state machine implementation in buildMachine() method.
configureStates
configureTransitions
Send events to state machine
Refer to the documentation page for the complete code.
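For reference, a minimal buildMachine()-style configuration might look like this. It follows the Spring State Machine documentation, with the state and event names from the steps above, so treat it as a sketch rather than a drop-in:

    import java.util.EnumSet;

    import org.springframework.context.annotation.Configuration;
    import org.springframework.statemachine.config.EnableStateMachine;
    import org.springframework.statemachine.config.EnumStateMachineConfigurerAdapter;
    import org.springframework.statemachine.config.builders.StateMachineStateConfigurer;
    import org.springframework.statemachine.config.builders.StateMachineTransitionConfigurer;

    enum States { STATE1, STATE2 }
    enum Events { EVENT1, EVENT2 }

    @Configuration
    @EnableStateMachine
    public class OrderStateMachineConfig
            extends EnumStateMachineConfigurerAdapter<States, Events> {

        @Override
        public void configure(StateMachineStateConfigurer<States, Events> states)
                throws Exception {
            // configureStates: declare the initial state and the full state set.
            states.withStates()
                  .initial(States.STATE1)
                  .states(EnumSet.allOf(States.class));
        }

        @Override
        public void configure(StateMachineTransitionConfigurer<States, Events> transitions)
                throws Exception {
            // configureTransitions: EVENT1 moves the machine from STATE1 to STATE2.
            transitions.withExternal()
                       .source(States.STATE1)
                       .target(States.STATE2)
                       .event(Events.EVENT1);
        }
    }

Events are then sent to an injected StateMachine<States, Events> instance, e.g. stateMachine.sendEvent(Events.EVENT1).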
I have written a few posts on this topic that may also help:
API Gateway pattern - coarse-grained vs fine-grained APIs
https://www.linkedin.com/pulse/api-gateway-pattern-ronen-hamias/
https://www.linkedin.com/pulse/successfulapi-ronen-hamias/
Coarse-grained vs Fine-grained service API
By definition, a coarse-grained service operation has broader scope than a fine-grained one, although the terms are relative. Coarse-grained operations increase design complexity but can reduce the number of calls required to complete a task. In a microservices architecture, coarse-grained operations may reside at the API Gateway layer and orchestrate several microservices to complete a specific business operation. Coarse-grained APIs need to be carefully designed, because involving several microservices that manage different domains of expertise risks mixing concerns in a single API and breaking the rules described above. Coarse-grained APIs may also suggest a new level of granularity for business functions that did not exist otherwise.
For example, hiring an employee may involve two microservice calls: one to the HR system to create an employee ID and another to the LDAP system to create a user account. Alternatively, the client could have performed two fine-grained API calls to achieve the same task. While the coarse-grained API represents the business use case "create user account", the fine-grained APIs represent the capabilities involved in that task. Furthermore, fine-grained APIs may involve different technologies and communication protocols, while a coarse-grained API abstracts them into a unified flow. When designing a system, consider both, as there is no golden approach that solves everything, and there is a trade-off for each. Coarse-grained APIs are particularly suited as services to be consumed in other business contexts, such as other applications, lines of business, or even other organizations across the enterprise's boundaries (typical B2B scenarios).
The answer to the original question is the Saga pattern.
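Very roughly, an orchestration-based saga for the question's ship-then-invoice flow could look like the Java sketch below. The service interfaces are hypothetical, and a real saga would also persist its own state so it can resume or compensate after a crash:

    interface ShipmentService {
        String ship(String orderId);            // returns a shipment id
        void cancelShipment(String shipmentId); // compensating action
    }

    interface InvoiceService {
        void createInvoice(String orderId);
    }

    class OrderSaga {
        private final ShipmentService shipment;
        private final InvoiceService invoicing;

        OrderSaga(ShipmentService shipment, InvoiceService invoicing) {
            this.shipment = shipment;
            this.invoicing = invoicing;
        }

        void handle(String orderId) {
            String shipmentId = shipment.ship(orderId);
            try {
                invoicing.createInvoice(orderId);
            } catch (RuntimeException e) {
                // No distributed transaction: if a later step fails, run the
                // compensating action so the system converges back to a
                // consistent state.
                shipment.cancelShipment(shipmentId);
                throw e;
            }
        }
    }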
Related
(Please let me know if this question belongs on a different Stack Exchange site.)
I have a use case where the clients of my service have to call some APIs exposed in the service. The API specification model being used allows for auto-generation of client bindings for different languages.
I need to provide enhanced functionality around some of the APIs, and that custom code sits around the calls to the API. Instead of expecting every client to write this code on their own, I would like to provide it as a wrapper around the auto-generated client library. I understand that this needs to be done for each of the languages to be supported (the list is 2-3 in my case).
In general, is this a good choice to make? Are there any other alternatives?
Please let me know if more details are needed.
Generating clients from some kind of API definition (like OpenAPI) and shipping them with other code is a very common pattern, and I've used and created such clients in the past. For the consumers of your API this has some clear advantages: they don't have to generate the client (which, depending on the technologies used, is sometimes painful), they benefit from the maintenance done by the provider (and others), and they use the official way to interact with the API.
In such a client there are three main packages, which should be separated and independent from each other:
The generated client
Code which eases the usage of the generated client
Other code (i.e. utilities that abstract or simplify common interactions, provide domain logic required on the client side, etc.)
If these are separated, it is very easy for a consumer to pick the pieces they want to use, as sketched below.
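To sketch that separation in Java (GeneratedApiClient and its method are hypothetical stand-ins for whatever your generator, e.g. OpenAPI Generator, produces):

    // Package 1: generated code, never edited by hand.
    class GeneratedApiClient {
        String callCreateUserEndpoint(String rawJson) {
            return "{\"id\":\"42\"}"; // stub for the sketch
        }
    }

    // Package 2: hand-written code that eases usage of the generated client
    // and carries the enhanced functionality around the API calls.
    class UserApi {
        private final GeneratedApiClient client;

        UserApi(GeneratedApiClient client) {
            this.client = client;
        }

        String createUser(String name) {
            // Validation, retries, mapping to domain types, etc. live here,
            // so each consumer doesn't have to rewrite them.
            if (name == null || name.isBlank()) {
                throw new IllegalArgumentException("name must not be blank");
            }
            return client.callCreateUserEndpoint("{\"name\":\"" + name + "\"}");
        }
    }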
The main disadvantage is that you have to implement and maintain multiple clients for your API. Depending on the size of your API, the supported platforms and environments the clients are used in, this can be a very elaborate task. Also keep in mind that providing a decent client library requires a good understanding of the target platforms and environments. Otherwise your client library might not be accepted by other developers.
In general, generated code is often not that "natural" and "nice". For example, the generated identifiers might not follow the conventions of the platform, or it may require the usage of overcomplicated constructs like factories to create a simple object. Often the generators can be tweaked, but this adds to the required effort.
All these efforts often add up, so that even big API Providers struggle to provide good client libraries for many platforms.
There are two alternatives:
Only provide an API definition
Handcraft a client
The first alternative gives consumers the freedom to choose the way they want to use your API. Given a good API definition (which is hard to write), it is relatively easy to do so. In this case it is not possible to provide additional code to the client, but in general you should aim for dumb clients and avoid having clients perform business logic.
A handcrafted client is best suited if you aim for a limited number of platforms on which you want to provide the best possible experience for the consumers. Furthermore, you can implement all kinds of additional functionality. But even for a single platform this might be a huge effort.
After reading this great article I thought about migrating our platform to a microservices architecture.
Our stack is ASP.NET Web API (REST...) on the server.
Angular 2 on the front end.
I wanted to make a little proof of concept to check if we should continue down this road.
As I understand it, I need to take some chunks from our web app and slice them into microservices. To begin with, I want to take two screens I have, "Users" and "Purchase History" (each of them is too big to be a microservice, but this is just for the POC), and create each of them as a microservice.
I read that the UI should be part of the microservice, so should I create a new Angular 2 app for each of them?
If so, should I use REST to fetch the rendered HTML?
Frontend and backend API, two services (components): that's already some kind of microservice architecture. The questions are: how big are your components, what kind of logic do they have, and will you benefit from splitting some logic into different services?
In a microservice architecture, every service (component) of your system should have some dedicated logic (domain), solve a set of related problems, persist data to its own data store, and be developed and deployed separately. In some cases, a data store can be shared between services.
So, the goal of splitting logic into different services is to make your application easier to develop, maintain, support, and understand. Too many small services can bring a lot of overhead: to create a service you need to spend additional development time, a service is also a deployment item, and communication between services has network overhead. So you should carefully consider all the pros and cons of splitting logic into services; some balance should be found.
Going back to the question: if "Users" and "Purchase History" are totally different, have no common logic, can be stored in different databases, and both are complicated enough, then you can split them into two services. The same goes for the UI parts. The main thing is that splitting should bring you benefits, not overhead.
About using REST: it's up to you. REST architecture is not required for a microservice architecture, but very often they are used together. REST is about the design of your services, how they expose their API, and so on.
The project I am working on is moving from an n-tier to a SOA architecture, so I have been reading up on good SOA practices. I'm struggling to understand the tension between avoiding RPC-style services in favor of event-driven services, and the requirement of user interfaces to retrieve data, and to do it speedily.
So for instance, ideally a SOA architecture would be composed of repeatable business process wherein you could simply publish a message onto an ESB which would handle finding the services that handle that message. So rather than executing a procedure called "Setup New User" which set out to do all the tasks related to new user setup, you would publish a message into the ESB that just contained the new user's details and had the appropriate document type "New User" and then the ESB would find services that handled that event that would then do whatever domain specific new user provisioning was required.
However, sometimes you just need data. Maybe you have a page that shows some list of user associated data. You can't just fire off a message into the ESB because you need data back and you need it now. Also, you aren't really triggering any business processes; you're just retrieving data from previously invoked business processes (the processes that caused the user to be associated with the data for instance). So to give a concrete example, maybe I just want to see the list of 10 Netflix movies a user has watched recently.
How do you reconcile these disparate types of services in a single SOA system?
In an ESB where an event-driven approach is followed, you have all kinds of listeners that detect events and act accordingly. These listeners may, for example, wait for direct messages arriving via some protocol at a certain endpoint. No matter what the trigger is (a purely business event that starts a business process, or a technical call that just needs to retrieve some data), it is still an event that is handled by the ESB. So you are not technically breaking the event-driven approach; it is enforced by your ESB solution. Moreover, keep in mind that SOA doesn't impose such a limitation: you do not have to implement everything in an event-driven manner.
In your case (provided, you don't have a dedicated BPM solution in place), I'd identify and implement two kinds of services on two purely conceptual layers in the ESB:
Technical services (the event is an incoming direct message for retrieval/modification of data), that can be either called directly by another system (via the ESB) or called by other process services.
Process services on the top (business) layer that are triggered in an event-driven way (using a topic queue, for example, where process services listen for their triggering event)
However, this may not be the most optimal approach. I've been discussing business processes in a dedicated business process layer versus process services in the ESB in this topic. Feel free to check it out, because it is somewhat related to your question.
I have been given access to a real time data feed which provides location information, and I would like to build a website around this, but I am a little unsure on what architecture to use to achieve my needs.
Unfortunately the feed I have access to will only allow a single connection per IP address, therefore building a website that talks directly to the feed is out - as each user would generate a new request, which would be rejected. It would also be desirable to perform some pre-processing on the data, so I guess I will need some kind of back end which retrieves the data, processes it, then makes it available to a website.
From a front end connection perspective, web services sounds like it may work, but would this also create multiple connections to the feed for each user? I would also like the back end connection to be persistent, so that data is retrieved and processed even when the site is not being visited, I believe IIS will recycle web services and websites when they are idle?
I would like to keep the design fairly flexible - in future I will be adding some mobile clients, so the API needs to support remote connections.
The simple solution would have been to log all the processed data to a database, which could then be picked up by the website, but this loses the real-time aspect of the data. Ideally, I would be looking to push the data to the website every time the data changes or new data is received.
What is the best way of achieving this, and what technologies are there out there that may assist here? Comet architecture sounds close to what I need, but that would require building a back end that can handle multiple web based queries at once, which seems like quite a task.
Ideally I would be looking for a C# / ASP.NET based solution with Javascript client side, although I guess this question is more based on architecture and concepts than technological implementations of these.
Thanks in advance for all advice!
Realtime Data Consumer
The simplest solution would seem to be having one component that is dedicated to reading the realtime feed. It could then publish the received data on to a queue (or multiple queues) for consumption by other components within your architecture.
This component (A) would be a standalone process, maybe a service.
Queue consumers
The queue(s) can be read by:
a component (B) dedicated to persisting data for future retrieval or querying. If the amount of data is large you could add more components that read from the persistence queue.
a component (C) that publishes the data directly to any connected subscribers. It could also do some processing, but if you are looking at doing large amounts of processing you may need multiple components that perform this task.
Realtime web technology components (D)
If you are using a .NET stack then it seems like SignalR is getting the most traction. You could also look at XSockets (there are more options in my realtime web tech guide; just search for '.NET').
You'll want to use SignalR to manage subscriptions and then to publish messages to registered clients (PubSub; this SO post seems relevant, maybe you can ask for a bit more info).
You could also look at offloading the PubSub component to a hosted service such as Pusher, who I work for. This will handle managing subscriptions and component C would just need to publish data to an appropriate channel. There are other options all listed in the realtime web tech guide.
All these components come with a JavaScript library.
Summary
Components:
A - .NET service - that publishes info to queue(s)
Queues - MSMQ, NServiceBus etc.
B - Could also be a simple .NET service that reads a queue.
C - this really depends on D since some realtime web technologies will be able to directly integrate. But it could also just be a simple .NET service that reads a queue.
D - Realtime web technology that offers a simple way of routing information to subscribers (PubSub).
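To show the shape of it, here is a sketch of the component layout above, written in Java for brevity; the same structure maps directly onto a .NET service with MSMQ or NServiceBus, and the in-memory queue is only a stand-in:

    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.function.Consumer;

    class FeedPipeline {
        static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        static final List<Consumer<String>> subscribers = new CopyOnWriteArrayList<>();

        // Component A: the single, persistent connection to the realtime feed
        // calls this for every message; pre-processing can happen here.
        static void onFeedMessage(String message) throws InterruptedException {
            queue.put(message);
        }

        // Components B and C: consumers running independently of web traffic.
        static void startConsumers() {
            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        String message = queue.take();
                        persist(message);                            // B: store for later queries
                        subscribers.forEach(s -> s.accept(message)); // C: push to clients via D
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.setDaemon(true);
            consumer.start();
        }

        static void persist(String message) { /* write to the database */ }
    }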
If you provide any more info I'll update my answer.
A good solution to this would be something like http://rubyeventmachine.com/ or http://nodejs.org/. It's not ASP.NET, but it can easily solve the issue of distributing real-time data to other users. Since user connections, subscriptions and broadcasting to channels are built into each, that will make coding the rest super simple. Your clients would just connect over standard TCP.
If you needed clients to poll for updates then you would need a queue system to store info for the next request. That could be a simple array, or a more complicated queue system depending on your requirements and number of users.
There may be solutions for .net that I am not aware of that do the same thing, but those are the 2 I know of.
Using a web service is often an excellent architectural approach. And, with the advent of WCF in .Net, it's getting even better.
But, in my experience, some people seem to think that web services should always be used in the data access layer for calls to the database. I don't think that web services are the universal solution.
I am thinking of smaller intranet applications with a few dozen users. The web app and its web service are deployed to one web server, not a web farm. There isn't going to be another web app in the future that can use this particular web service. It seems to me that the cost of calling the web service unnecessarily increases the burden on the web server. There is a performance hit to inter-process calls. Maintaining and debugging the code for the web app and the web service is more complicated. So is deployment. I just don't see the advantages of using a web service here.
One could test this by creating two versions of the web app, with and without the web service, and do stress testing, but I haven't done it.
Do you have an opinion on using web services for small-scale web app's? Any other occasions when web services are not a good architectural choice?
Web Services are an absolutely horrible choice for data access. It's a ton of overhead and complexity for almost zero benefit.
If your app is going to run on one machine, why deny it the ability to do in-process data access calls? I'm not talking about directly accessing the database from your UI code, I'm talking about abstracting your repositories away but still including their assemblies in your running web site.
There are cases where I'd recommend web services (and I'm assuming you mean SOAP) but that's mostly for interoperability.
The granularity of the services is also in question here. A service in the SOA sense will encapsulate an operation or a business process. Data access methods are only part of that process.
In other words:

    someService.SaveOrder(order);    // <-- bad
    // some other code for shipping, charging, emailing, etc.

    someService.FulfillOrder(order); // <-- better:
    // the service encapsulates the entire process
Web services for the sake of web services is irresponsible programming.
Nick Harrison, a brilliant developer in Charlotte, suggested these scenarios where using a web service makes sense:
On a Web farm, where there are multiple web servers hosting website(s), all pointing to web service(s) running on another web server. This allows for distributing the load over multiple servers.
Client/server, where Windows forms apps can call a web service.
Cross platform
Passing through a firewall
Just because the tool generates a bunch of stubs doesn't mean it's a good use. WS-* excels in scenarios where you expose services to external parties. This means that each operation should be on the granularity of business process as opposed to data access.
The multitude of standards can be used to describe different facets of your contract in great detail, and a (hypothetical) fully compliant WS stack can take away a lot of pain from third-party developers and even allow the fabled point-and-click integration à la Yahoo Pipes. With good governance controls you can evolve your public interface and manage backward compatibility as needed.
All this is next to impossible to be generated automatically. The C# stub generator knows only the physical interface of your class, but doesn't have any idea about the semantics involved. See this paper for more detailed discussion.
If you are building a web site, then build a web site. If you want asynchronous messaging inside your application, use MSMQ. If you want to expose data to internal clients, use POX. If you need efficient binary message format, check Google's Protocol Buffers or if you need RPC check Hessian for C# or DCOM.
Web services are a coarse grained integration solution. They are rigid, they are slower than alternatives, they take too much effort to do well (and when not done well are next to pointless).
To summarize: "When should a web service not be used?" - anytime you can get away without it
If you are just coding a tiny (less than 50 users) web application for your intranet, a web service seems overkill. Especially if its primary function (providing a single point of access to many services) won't be used.
I agree that the use of a web service in a small scale web app adds a layer of complexity that does not seem justified. Most of my solutions, internet and intranet, 10-50 users, do not employ web services. I am glad others feel the same...I thought I was the only one.
For a small-scale web app I think that using web services is often quite a good idea; you can use them to easily decouple the web server from the data tier. With straightforward development requirements and great tooling, I don't see the problem.
However don't use web services in the following scenarios:
When you must use HTTP as the transport and XML serialization of your data, and you need lots of different bits of data, synchronously and often. Whether REST or SOAP or WS-*, you're going to suffer performance issues. The more calls you make, the slower your system will be. If you want medium-sized chunks of data less frequently, asynchronously, and you can use straight TCP/IP (e.g. WCF netTcpBinding), you'd be better off.
When you need to query and join data from your web service with other data sources. Instead, consider a data warehouse, which can be populated with properly consolidated and rationalized data from across the enterprise.
This is my experience, hope it helps.
For a small-scale web app (You have to ask the question, "Will it always remain small scale?" though) using web services, separate business layers, data layers, and so on and so forth can be overkill.
Before anyone shoots me, I do agree that separation of logic between layers along with unit tests, continuous integration, et al are bloody brilliant. In my current role I'd be utterly lost and rocking in the corner without them. However for a very small-scale web app being used to, for example, track contact numbers and addresses for a company of 36 employees, the cost/benefit analysis would suggest that all the "niceties" listed above would be overkill.
However... Remember to ask the question "Will it always remain small scale?" :-)