JMS, or messaging in general, is really good at tying together disparate applications and forms the infrastructure of many ESB and SOA architectures.
However, say Application A needs an immediate response from a service on Application B, e.g. it needs the provisioning details of an order, or an immediate confirmation of some update. Is messaging the right solution for that from a performance point of view? Normally the client would connect to a MoM on a queue; then a listener, which has to be free, picks up the message and forwards it to the server-side processor, which processes the request and sends the response back to a queue or topic, where the requesting client follows the same process to pick it up. If the message size is big, the MoM has to factor that in as well.
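Roughly, in JMS terms, the flow I am describing looks like this (the queue name and payload are made up for illustration, and the responding service is assumed to copy the request's JMSMessageID into the reply's JMSCorrelationID, which is the usual convention):

import javax.jms.*;

public final class OrderProvisioningClient {
    // Minimal request/reply sketch over a MoM, assuming an already-started Connection
    public static String requestProvisioningDetails(Connection connection, String orderXml)
            throws JMSException {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue requestQueue = session.createQueue("ORDER.PROVISIONING.REQUEST"); // hypothetical queue
        TemporaryQueue replyQueue = session.createTemporaryQueue();

        TextMessage request = session.createTextMessage(orderXml);
        request.setJMSReplyTo(replyQueue);                  // tell the service where to answer
        session.createProducer(requestQueue).send(request);

        // Correlate on the request's message id and wait up to 5 seconds for the reply
        MessageConsumer replies = session.createConsumer(
                replyQueue, "JMSCorrelationID = '" + request.getJMSMessageID() + "'");
        Message reply = replies.receive(5000);
        return (reply instanceof TextMessage) ? ((TextMessage) reply).getText() : null;
    }
}

So every "immediate" answer costs a broker hop in each direction, plus a blocked consumer waiting on the reply queue.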
That makes me wonder whether HTTP is a better way to access such services instead of going via the messaging route. I have seen lots of applications use a MoM like ActiveMQ or TIBCO Rendezvous for immediate request/response - but is that bad design, or is there some fine tuning or setting that makes it perform the same as HTTP?
It really depends on your requirements. Typically messaging services will support one or all of the following:
Guaranteed Delivery
Transactional
Persistent (i.e. messages are persisted until delivered, even if the system fails in the interim)
An HTTP connection cannot [easily] implement these attributes, but then again, if you don't need them, then I suppose you could make the case that "simple" HTTP would offer a simpler and more lightweight solution. (Emphasis on the "simple" because some messaging implementations will operate over HTTP).
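For what it's worth, here's roughly how those attributes show up in plain JMS - a sketch only, with an invented queue name, assuming an already-started Connection:

import javax.jms.*;

public final class ReliableSender {
    public static void send(Connection connection, String text) throws JMSException {
        // Transacted session: the message only becomes visible to consumers on commit()
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(session.createQueue("ORDERS")); // hypothetical queue
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // broker stores the message until it is delivered
        producer.send(session.createTextMessage(text));
        session.commit();
    }
}

Getting the same guarantees over raw HTTP means building retry, de-duplication and durable storage yourself.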
I don't think Request/Response implemented over messaging is, in and of itself, bad design. I mean, here's the thing.... are you implementing both sides of this process? If not, and you already have an established messaging service that will respond to requests, all other considerations aside, that would seem to be the way to go... and bypassing that to reimplement using HTTP because of some design notion would need some fairly strong reasoning behind it, in my view.
But the inverse is true as well. If you have an HTTP-accessible resource already, and you don't have any super stringent requirements that might otherwise suggest a more robust messaging solution, I would not force one in where it's not warranted.
If you are commencing totally tabula rasa and you must implement both sides from scratch..... then..... post another question here with some details! :)
Related
We are designing a new system, which will be based on microservices.
I would like to ask whether it is considered an anti-pattern to produce to and consume from the same queue.
The service should be a REST-based microservice.
The backend microservice should handle large-scale operations on IoT devices, which may not always be connected to the internet, so the API must be asynchronous and provide robustness features such as a configurable number of retries before final failure, etc.
The API should immediately return a UUID with a 201 response from the POST request, and expose a 'Get Current Status' endpoint (pending, active, done, etc.) for the operation on the IoT device, accepting the UUID in the GET request.
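For illustration, the contract could look something like this (paths and field names are placeholders, not a final design):

POST /operations
{ "deviceId": "device-17", "command": "update-firmware" }

201 Created
Location: /operations/6f1c0a8e
{ "id": "6f1c0a8e", "status": "pending" }

GET /operations/6f1c0a8e

200 OK
{ "id": "6f1c0a8e", "status": "active", "retriesLeft": 3 }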
We've thought about two solutions (described at a high level) for managing this task:
Implement both an API GW microservice and a logic handler microservice.
The API GW will expose the REST API methods and will publish messages via RabbitMQ that will be consumed by multiple instances of the logic handler microservice.
The main drawback is that we will need to manage multiple deployments, and keep the API exposed by service A consistent with its consumer in service B.
Implement one microservice that exposes the relevant APIs and, in order to handle the scale and the asynchronous operations, publishes each request to its own queue over RabbitMQ, which is then consumed by the same microservice in a background worker (a rough sketch of this is shown below).
The second solution looks easier to develop, maintain, and update, because all the logic, including the handling of the REST APIs, is taken care of by the same microservice.
To some members of the team this solution looks a bit dirty, and we can't decide whether it is an anti-pattern to consume messages from your own queue.
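To make the second option concrete, here is a rough sketch of what we mean, using the RabbitMQ Java client; the queue name, payload and wiring are illustrative only, not our actual design:

import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

public class OperationsService {
    private static final String QUEUE = "operations"; // hypothetical queue name

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();

        // Background worker: consumes from the service's own queue
        Channel consumerChannel = connection.createChannel();
        consumerChannel.queueDeclare(QUEUE, true, false, false, null);
        consumerChannel.basicConsume(QUEUE, true,
                (tag, delivery) -> handle(new String(delivery.getBody(), StandardCharsets.UTF_8)),
                tag -> { });

        // REST layer (not shown): each accepted POST would publish its request to the same queue
        Channel producerChannel = connection.createChannel();
        producerChannel.basicPublish("", QUEUE, MessageProperties.PERSISTENT_TEXT_PLAIN,
                "{\"operationId\":\"6f1c0a8e\",\"command\":\"update-firmware\"}"
                        .getBytes(StandardCharsets.UTF_8));
    }

    private static void handle(String message) {
        // the long-running IoT operation, retry counting and status updates would live here
        System.out.println("processing " + message);
    }
}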
Any advice will be helpful.
I would like to ask whether it is considered an anti-pattern to produce to and consume from the same queue.
That definitely sounds suspicious, not that I have a lot of experience with queues. But you also talk about microservices, and producing and consuming from those - that sounds fine; there's no reason why a microservice (and by extension, its API) can't do both. But then I'm a bit confused, because in reading the rest of the question I don't really see how that issue is a factor.
Having some kind of separation between the API and the microservice sounds wise, because you can change the microservice's implementation without affecting callers, assuming it's a non-breaking change. It means you have the ability to solve API/consumer problems and backend problems separately.
Versioning is just a part of good engineering practice, and probably not an ideal reason to bend your architecture. You should be able to run more than one version in parallel - a lot of API providers operate an N+2 model, where they support the current version plus the last two (major) releases. That way you give your consumers a reasonable runway for upgrading.
As long as you keep the two concerns modular, so that you'd be able to separate them when it makes sense, it doesn't matter much.
I'd think that in the longer term you'd probably want to treat them as two separate services, as they'd probably have different update cycles (e.g. the gateway part may need things like auth, maybe an additional API in gRPC, etc.), different security requirements (one is accessible from the outside while the other consumes internal resources), different scalability concerns (you'd probably need more resources for the processing), etc.
We are using a microservices architecture where top-level services are used for exposing REST APIs to the end user, and backend services do the work of querying the database.
When we get 1 user request, we make ~30k requests to the backend service. We are using RxJava in the top-level service, so all 30k requests get executed in parallel.
We are using HAProxy to distribute the load between the backend services.
However, when we get 3-5 user requests we start getting network connection exceptions, No Route to Host exceptions, and socket connection exceptions.
What are the best practices for this kind of use case?
Well, you ended up with the classical microservice mayhem. It's completely irrelevant what technologies you employ - the problem lies in the way you applied the concept of microservices!
It is natural in this architecture that services call each other (preferably that should happen asynchronously!). Since I know only a little about your service APIs, I'll have to make some assumptions about what went wrong in your backend:
I assume that a user makes a request to one service. This service will now (obviously synchronously) query another service and receive these 30k records you described. Since you probably have to know more about these records you now have to make another request per record to a third service/endpoint to aggregate all the information your frontend requires!
This shows me that you probably got the whole thing with bounded contexts wrong! So much for the analytical part. Now to the solution:
Your API should return all the information along with the query that enumerates the records! Sometimes that can seem like a contradiction to the kind of isolation and authority over data/state that the microservices pattern specifies - but it is not feasible to isolate data/state in a single service only, because that leads to the problem you currently have: all other services HAVE to query that data every time to be able to return correct data to the frontend! However, it is possible to duplicate it as long as the authority over the data/state is clear!
Let me illustrate that with an example: let's assume you have a classical shop system. Articles are grouped. Now you would probably write two microservices - one that handles articles and one that handles groups! And you would be right to do so! You might have already decided that the group-service will hold the relation to the articles assigned to a group! Now if the frontend wants to show all items in a group, what happens? The group-service receives the request and returns 30'000 article numbers in a beautiful JSON array that the frontend receives. This is where it all goes south: the frontend now has to query the article-service for every article it received from the group-service!!! Aaand you're screwed!
Now there are multiple ways to solve this problem: one is (as previously mentioned) to duplicate article information to the group-service: so every time an article is assigned to a group using the group-service, it has to read all the information for that article from the article-service and store it, to be able to return it with the get-me-all-the-articles-in-group-x query. This is fairly simple, but keep in mind that you will need to update this information when it changes in the article-service, or you'll be serving stale data from the group-service. Event sourcing can be a very powerful tool in this use case and I suggest you read up on it! You can also use simple messages sent from one service (in this case the article-service) to a message bus of your preference and make the group-service listen and react to these messages.
Another very simple, quick-and-dirty solution could be to provide a new REST endpoint on the article-service that takes an array of article ids and returns the information for all of them in a single response, which would be much quicker. This could probably solve your problem very quickly.
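For illustration only (the path and parameter are invented), such a batch endpoint could look like:

GET /articles?ids=1001,1002,1003

200 OK
[ { "id": 1001, "name": "Article 1001", "price": 9.90 },
  { "id": 1002, "name": "Article 1002", "price": 4.50 },
  { "id": 1003, "name": "Article 1003", "price": 7.00 } ]

For very long id lists you would probably switch to a POST with the ids in the body, but the idea is the same: one network round trip instead of one per article.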
A good rule of thumb in a backend with microservices is to aspire to a constant number of these cross-service calls, which means the number of calls that go across service boundaries should never be directly related to the amount of data that was requested! We closely monitor what service calls are made because of a given request that comes through our API, to keep track of which services call which other services and where our performance bottlenecks will arise or have been caused. Whenever we detect that a service makes many calls to other services (there is no fixed threshold, but every time I see >4 I start asking questions!) we investigate why and how this could be fixed! There are some great metrics tools out there that can help you with tracing requests across service boundaries!
Let me know if this was helpful or not, and whatever solution you implemented!
This is a theoretical question.
Imagine an ASP.NET website. By clicking a button, the site sends an email. Now:
I can send mail async with code
I can send mail using QueueBackgroundWorkItem
I can call a ONEWAY web service located in the same website
I can call a ONEWAY webservice located in ANOTHER website (or another subdomain)
None of the above solutions waits for the mail operation to complete, so they are all fine in that respect.
My question is: why should I use the service solutions instead of the others? Is there an advantage?
The 4th solution adds additional TCP/IP traffic just to call the service, so it's not efficient, right?
If so, using a service under the same website (3rd solution) also generates additional traffic. Is that correct?
I need to understand why people use services under the same website. Is there any reason besides making something available to AJAX calls?
Any information would be great. I really need to get opinions.
Best
The most appropriate architecture will depend on several factors:
the volume of emails that needs to be sent
the need to reuse the email sending capability beyond the use case described
the simplicity of implementation, deployment, and maintenance of the code
Separating out the sending of emails into a service, either in the same or in another web application, will make it available to other applications and to client-side code. It also adds some complexity to the code calling the service, as it will need to deal with the case where the service is not available and handle errors that may occur when placing the call.
Using a separate web application for the service is useful if the volume of emails sent is really large, as it allows the work to be offloaded to one or more servers if needed. Given the use case described (a user clicks on a button), this seems rather unlikely, unless the web site will have really large traffic. Creating a separate web application adds significant development, deployment and maintenance work, both initially and over time.
Unless the volume of emails to be sent is really large (millions per day) or there is a need to reuse the email capability in other systems, creating the email sending function within the same web application (first two options listed in the question) is almost certainly the best way to go. It will result in the least amount of initial work, is easy to deploy, and (perhaps most importantly) will be the easiest to maintain.
An important concern to pay significant attention to when implementing an email sending function is robustness. Robustness can be achieved with any of the possible architectures and is somewhat of a different concern from the one emphasized by the question. Still, it is important to consider the proper course of action if (1) the receiving SMTP server refuses to take the message (e.g., mailbox full; non-existent account; rejection as spam) or (2) an NDR is generated after the message is sent (e.g., rejection as spam). Depending on the kind of email sent, it may be OK to ignore these errors, or some corrective action may be needed (e.g., retry sending, alert the user who originated the email, ...)
Our client follows SOA principles and has designed web services that are very fine-grained, like createCustomer, deleteCustomer, etc.
I am not sure if fine-grained services are desirable, as they create transaction-related issues. For example, suppose a business requirement is that every Customer must have an Address when it's created. In this case, the presentation component will invoke createCustomer first and then createAddress. The services internally use simple JDBC to update the respective tables in the database. As each service is invoked by an external component, there is no way of fulfilling the transactional requirement here, i.e. if createAddress fails, the createCustomer operation must be rolled back.
I guess one approach to deal with this is to either design coarse-grained services (that create a Customer and the associated Address in one single JDBC transaction), or
perhaps simply create a reversing service (deleteCustomer) that undoes the action of createCustomer.
Any suggestions? Thanks.
The short answer: services should be designed for the convenience of the service client. If the client is told "call this, then don't forget to call that", you're making their lives too difficult. There should be a coarse-grained service.
The longer answer: can a Customer reasonably be entered with no Address? Suppose it can; then we call
createCustomer( stuff but no address)
and the result is a valid (if maybe not ideal) state for a customer. Later we call
changeCustomerAddress ( customerId, Address)
and now the persisted customer is more useful.
In this scenario the API is just fine. The key point is that the system's integrity does not depend upon the client code "remembering" to do something, in this case to add the address. However, more likely we don't want a customer in the system without an address in which case I see it as the service's responsibility to ensure that this happens, and to give the caller the fewest possibilities of getting it wrong.
I would see a coarse-grained createCompleteCustomer() method as by far the best way to go - this allows the service provider to solve the problem once rather than require every client programmer to implement the logic.
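As a rough sketch of what that coarse-grained operation might look like over plain JDBC (table and column names are invented, and an open java.sql.Connection is assumed):

import java.sql.*;

public final class CustomerService {
    public static long createCompleteCustomer(Connection db, String name, String street, String city)
            throws SQLException {
        db.setAutoCommit(false); // customer and address succeed or fail as one unit of work
        try (PreparedStatement insCustomer = db.prepareStatement(
                     "INSERT INTO customer (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS);
             PreparedStatement insAddress = db.prepareStatement(
                     "INSERT INTO address (customer_id, street, city) VALUES (?, ?, ?)")) {

            insCustomer.setString(1, name);
            insCustomer.executeUpdate();
            ResultSet keys = insCustomer.getGeneratedKeys();
            keys.next();
            long customerId = keys.getLong(1);

            insAddress.setLong(1, customerId);
            insAddress.setString(2, street);
            insAddress.setString(3, city);
            insAddress.executeUpdate();

            db.commit();   // both rows become visible together
            return customerId;
        } catch (SQLException e) {
            db.rollback(); // no half-created customer is left behind
            throw e;
        }
    }
}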
Alternatives:
a) There are web services specs for atomic transactions, and major vendors do support these specs. In principle you could actually implement this using fine-grained methods and true transactions. Practically, I think you enter a world of complexity when you go down this route.
b) A stateful interface (work, work, commit) as mentioned by @mtreit. Generally speaking, statefulness either adds complexity or obstructs scalability. Where does the service hold the intermediate state? If in memory, then we require affinity to a particular service instance and hence introduce scaling and reliability problems. If in some state or work-in-progress database, then we have significant additional implementation complexity.
OK, let's start:
Our client follows SOA principles and has designed web services that are very fine-grained, like createCustomer, deleteCustomer, etc.
No, the client has forgotten to reach the SOA principles and put up what most people do - a morass of badly defined interfaces. Following SOA principles, the client would have gone for a coarser interface (such as, for example, the OData mechanism to update data) or followed the advice of any book on multi-tiered architecture written in the last 25 years. SOA is just another word for what was invented with CORBA, and all the mistakes SOA dudes make today were basically well-known design stupidities 10 years ago with CORBA. Not that any of the people doing SOA today have ever heard of CORBA.
I am not sure if fine-grained services are desirable, as they create transaction-related issues.
Only for users and platforms not supporting web services. Seriously. Naturally you get transactional issues if you ignore transactional issues in your programming. The trick here is that the people further up the food chain did not - just your client decided to ignore common knowledge (again, see my first remark on CORBA).
The people designing web services were well aware of transactional issues, which is why the web service specifications (WS-*) actually contain mechanisms for handling transactional integrity by moving commit operations up to the client calling the web service. The particular spec you and your client should read is WS-AtomicTransaction.
If you use current technology to expose your web service (e.g. WCF on the MS platform; similar technologies exist in the Java world) then you can expose transaction flow information to the client and let the client handle transaction demarcation. This has its own share of problems - like clients keeping transactions open maliciously - but is still pretty much the only way to handle transactions that are demarcated in the client.
As you give no platform and just mention Java, I am pointing you to an MS example of how that can look:
http://msdn.microsoft.com/en-us/library/ms752261.aspx
Web services, in general, are a lot more powerful and a lot more thought out than most people doing SOA ever realize. Most of the problems they see were solved a long time ago. But then, SOA is just a buzzword for multi-tiered architecture, and most people who think it is the greatest thing since sliced bread just don't even know what was around 10 years ago.
As your customer I would be a lot more careful about the performance side. Fine-grained, non-semantic web services like the ones he defines are a performance hog for anything but casual use, because the number of times you cross the network to ask for / update small stuff means the network latency kills you. Creating an order for, say, 10 goods can easily take 30-40 network calls in this scenario, which may well take a lot of time. SOA has preached, ever since the beginning (if you ignore the ramblings of those who don't know the history), NOT to use fine-grained calls but to go for a coarse-grained exchange of documents and/or a semantic approach, much like the OData system.
If transactionality is required, a coarser-grained single operation that can implement transaction-semantics on the server is definitely going to be much simpler to implement.
That said, certainly it is possible to construct some scheme where the target of the operations is not committed until all of the necessary fine-grained operations have succeeded. For instance, have a Commit operation that checks some flag associated with the object on the server; the flag is not set until all of the necessary steps in the transaction have completed, and Commit fails if the flag is not set.
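A very rough sketch of that scheme (the class and method names are mine, purely illustrative):

public class PendingCustomer {
    private boolean hasDetails;
    private boolean hasAddress;

    // Each fine-grained operation records that its step has been performed
    public void setDetails(String name)   { this.hasDetails = true; /* store the details */ }
    public void setAddress(String street) { this.hasAddress = true; /* store the address */ }

    // Commit refuses to run until every mandatory step has happened
    public void commit() {
        if (!(hasDetails && hasAddress)) {
            throw new IllegalStateException("customer is incomplete - cannot commit");
        }
        // ... persist the now-complete customer here ...
    }
}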
Of course, if having light-weight, fine grained operations is an important design requirement, perhaps the need to have transactionality should be re-thought.
I understand (I think) the basic idea behind RESTful-ness. Use HTTP methods semantically - GET gets, PUT puts, DELETE deletes, etc. Right? I thought I understood the idea behind REST, but I think I'm confusing that with the details of an HTTP implementation. What is the driving idea behind REST, and why is it becoming an important thing? Have people actually been using it for a long time, in a corner of the internets that my flashlight never shined upon?
The Google talk mentions the Atom Publishing Protocol having a lot of synergy with RESTful implementations. Any thoughts on that?
This is what REST might look like:
POST /user
fname=John&lname=Doe&age=25
The server responds:
201 Created
Location: /user/123
In the future, you can then retrieve the user information:
GET /user/123
The server responds (assuming an XML response):
200 OK
<user><fname>John</fname><lname>Doe</lname><age>25</age></user>
To update:
PUT /user/123
fname=Johnny
Here's my view...
The attraction of making RESTful services is that rather than creating web services with dozens of functional methods, we standardize on four methods (Create, Retrieve, Update, Destroy):
POST
GET
PUT
DELETE
REST is becoming popular because it also represents a standardization of messaging formats at the application layer. While HTTP uses the four basic verbs of REST, the common HTTP message format of HTML isn't a contract for building applications.
The best explanation I've heard is a comparison of TCP/IP to RSS.
Ethernet represents a standardization at the physical network level. The Internet Protocol (IP) represents a standardization higher up the stack, with several transport protocols running on top of it (TCP, UDP, etc). The introduction of the "Transmission Control Protocol" (guaranteed packet delivery) defined communication contracts that opened us up to a whole new set of services (FTP, Gopher, Telnet, HTTP) at the application layer.
In the analogy, we've adopted XML as the "protocol", and we are now beginning to standardize message formats. RSS is quickly becoming the basis for many RESTful services. Google's GData API is an RSS/Atom variant.
The "desktop gadget" is a great realization of this hype: a simple client can consume basic web-content or complex-mashups using a common API and messaging standard.
HTTP is currently under-used and misused.
We usually use only two methods of HTTP: GET and POST, but there are some more: DELETE, PUT, etc (http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html)
So if we have resources defined by RESTful URLs (each domain object in your application has a unique URL of the form http://yoursite.com/path/to/the/resource) and a decent HTTP implementation, we can manipulate objects in your domain by writing sentences:
GET http://yoursite.com/path/to/the/resource
DELETE http://yoursite.com/path/to/the/resource
POST http://yoursite.com/path/to/the/resource
etc
The architecture is nice and everything.
But this is just a theoretical view; real-world scenarios are described in the links posted in the answers before mine.
Let's go back to history and talk about Roy Fielding's research - "Architectural Styles and the Design of Network-based Software Architectures". It's a big paper and covers a lot of different topics. But as a working engineer, how would you explain the clear meaning of REST (Representational State Transfer) and its architectural style?
Here is my way of explaining "what is REST".
Look at the WWW (World Wide Web) running on top of various hardware: routers, servers, firewalls, cloud infrastructure, switches, LANs, WANs. The overall objective of the WWW is to distribute hypermedia. The web is equipped with various services, e.g. information services, websites, YouTube channels, dynamic websites, static websites. It uses the HTTP protocol to distribute hypermedia across the world with a client/server mechanism. The HTTP protocol works on top of TCP/IP or another appropriate network stack.
The HTTP protocol uses eight methods to manage this 'protocol of distribution', or 'architectural style of distribution'. Those eight methods are: OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, CONNECT.
But on top of HTTP, web applications use their own ways of distributing hypermedia, e.g. web services that are tightly tied to particular clients and servers, or custom-designed client/server mechanisms that build such a distribution channel on top of HTTP.
What Roy Fielding's research says is this: these eight HTTP methods (OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE, CONNECT) have been so successful at delivering hypermedia all across the world, on top of a variety of hardware resources and network stacks, with a client/server mechanism - so why don't we use a similar strategy for our web-based applications as well? Of these, GET, POST, PUT and DELETE are used the most, so essentially four methods deliver hypermedia all across the world.
In an application following the REST architectural style, the web application needs to design its business logic (which resides on a server, e.g. Tomcat or Apache HTTP Server) as a set of object entities (e.g. Customer is an entity) and the possible operations on them (e.g. 'retrieve customer information based on a customer id'). Those operations on these entities should be designed around four main operations or methods: Create, Retrieve, Update, Delete. These entities are called resources, and they are presented or represented in some form, e.g. JSON or XML or something else. We have clients (browsers) that call Create, Retrieve, Update, Delete (CRUD) methods to perform the appropriate function on a resource that resides on the server.
So, as explained, the concept of Representation means the way entities of the business logic, or objects, are represented. But what about 'State Transfer'?
State Transfer talks about the 'state of communication' from client to server. It is about designing the 'state transfers' from client to server, e.g. the client first calls the operation 'Create Customer'; after calling this, what would be the next state (or states) of the customer that the client can move to? Its next state may be 'retrieve the created customer data', 'update the customer data', and so on.
REST is an architecture where resources are defined and addressed.
To understand REST best, you should look at the Resource Oriented Architecture (ROA) which gives a set of guidelines for when actually implementing the REST architecture.
REST does not need to run over HTTP, but HTTP is the most common choice. REST was first described by one of the creators of HTTP, though.