Should there be authentication/authorization between microservices?

I know this may not be a good question.
I was asked: do we really need authentication among microservices? I had no idea how to answer. I have read some tutorials on SOA, microservices, and how to add authentication among the services, but I still don't have a clear idea of why we need authentication/authorization between microservices. Are there use cases where they are required? Use cases where they are not required? What are the potential risks of going without authentication/authorization?
Any comments are welcome; practical examples are especially appreciated. Thanks

Whether a microservice that you design and develop requires authentication is up to your functional requirements and the way you design it.
A common technique is to not have authentication on each individual microservice, but to group them behind a common facade (such as an API Manager). You can then apply authentication and other policies in one place - the policy enforcement point/API Manager - for "external" consumers, while "internally", behind your common security boundary, your microservices remain lightweight and can call each other without authentication (if that makes sense for your use case/requirements/architecture etc.).
To sum up - it's a design decision that involves multiple tradeoffs.
Clearly, if you have a critical business service fetching or updating sensitive data, you might want only authorised callers to access it. But you might not want many internal callers (could be other microservices) running within your organisation's "trusted" network to be burdened with unnecessary policy enforcement.
But then, there might be situations where even internal callers need to authenticate properly (e.g. if it is a payment service), as sketched below.
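For those cases where internal callers do need to authenticate, a lightweight option is a shared-secret HMAC token rather than a full OAuth2/OIDC deployment. Below is a minimal sketch using only the JDK; the token format, the hard-coded secret, and the class name are assumptions for illustration, and a real token would also carry an expiry or nonce:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class ServiceToken {
        // In a real system the secret comes from a vault/KMS, not a constant.
        private static final byte[] SECRET = "demo-shared-secret".getBytes(StandardCharsets.UTF_8);

        /** Issues a token binding the calling service's name to an HMAC signature. */
        public static String issue(String serviceName) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            byte[] sig = mac.doFinal(serviceName.getBytes(StandardCharsets.UTF_8));
            return serviceName + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        }

        /** Verifies a token and returns the calling service's name, or null if invalid. */
        public static String verify(String token) throws Exception {
            int dot = token.lastIndexOf('.');
            if (dot < 0) return null;
            String serviceName = token.substring(0, dot);
            // Recompute the expected token and compare in constant time.
            boolean ok = MessageDigest.isEqual(
                    issue(serviceName).getBytes(StandardCharsets.UTF_8),
                    token.getBytes(StandardCharsets.UTF_8));
            return ok ? serviceName : null;
        }
    }

The caller puts the token in a request header, and the payment service calls verify() before doing any work, so only services that know the shared secret can invoke it.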

Authentication/authorization is in most cases needed for microservices that provide a public API, as they are available/visible to the whole world.
Why? Because when someone out in the world calls an API method, we (in most cases) want to know who the client is (authentication) and decide what that client is allowed to do (authorization).
On the other hand, for internal microservices the clients are (in most cases) well known, as they are other internal microservices. So unless you need to enforce different restrictions for different internal microservices, there is no need for authorization. Note that I assume the internal components are only available within the organization.

If your organization is concerned with internal threats (and why wouldn't it be?), then yes, all microservices need to be protected from malicious use.


Orchestrating microservices

What is the standard pattern of orchestrating microservices?
If a microservice only knows about its own domain, but there is a flow of data that requires that multiple services interact in some manner, what's the way to go about it?
Let's say we have something like this:
Invoicing
Shipment
And for the sake of the argument, let's say that once an order has been shipped, the invoice should be created.
Somewhere, someone presses a button in a GUI, "I'm done, let's do this!"
In a classic monolith service architecture, I'd say that there is either an ESB handling this, or the Shipment service has knowledge of the invoice service and just calls that.
But what is the way people deal with this in this brave new world of microservices?
I do get that this could be considered highly opinion-based, but there is a concrete side to it, as microservices are not supposed to do the above.
So there has to be a "what should it by definition do instead", which is not opinion-based.
Shoot.
The book Building Microservices describes in detail the styles mentioned by @RogerAlsing in his answer.
On page 43 under Orchestration vs Choreography the book says:
As we start to model more and more complex logic, we have to deal with
the problem of managing business processes that stretch across the
boundary of individual services. And with microservices, we’ll hit
this limit sooner than usual. [...] When it comes to actually
implementing this flow, there are two styles of architecture we could
follow. With orchestration, we rely on a central brain to guide and
drive the process, much like the conductor in an orchestra. With
choreography, we inform each part of the system of its job and let it
work out the details, like dancers all finding their way and reacting
to others around them in a ballet.
The book then proceeds to explain the two styles. The orchestration style corresponds more to the SOA idea of orchestration/task services, whereas the choreography style corresponds to the dumb pipes and smart endpoints mentioned in Martin Fowler's article.
Orchestration Style
Under this style, the book above mentions:
Let’s think about what an orchestration solution would look like for
this flow. Here, probably the simplest thing to do would be to have
our customer service act as the central brain. On creation, it talks
to the loyalty points bank, email service, and postal service [...],
through a series of request/response calls. The
customer service itself can then track where a customer is in this
process. It can check to see if the customer’s account has been set
up, or the email sent, or the post delivered. We get to take the
flowchart [...] and model it directly into code. We could even use
tooling that implements this for us, perhaps using an appropriate
rules engine. Commercial tools exist for this very purpose in the form
of business process modeling software. Assuming we use synchronous
request/response, we could even know if each stage has worked [...]
The downside to this orchestration approach is that the customer
service can become too much of a central governing authority. It can
become the hub in the middle of a web and a central point where logic
starts to live. I have seen this approach result in a small number of
smart “god” services telling anemic CRUD-based services what to do.
Note: I suppose that when the author mentions tooling he's referring to something like a BPM engine (e.g. Activiti, Apache ODE, Camunda). As a matter of fact, the Workflow Patterns website has an awesome set of patterns for this kind of orchestration, and it also offers evaluation details of different vendor tools that help to implement it this way. I don't think the author implies one is required to use one of these tools to implement this style of integration, though; other lightweight orchestration frameworks could be used, e.g. Spring Integration, Apache Camel, or Mule ESB.
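To make the orchestration style concrete, here is a minimal sketch of the customer service acting as the "central brain" from the quote above. The service interfaces and method names are made up for the example; a real orchestrator would add timeouts, retries, and error handling:

    // Downstream services, reduced to the calls the orchestrator makes.
    interface LoyaltyPointsBank { void createAccount(String customerId); }
    interface EmailService     { void sendWelcomeEmail(String customerId); }
    interface PostalService    { void sendWelcomePack(String customerId); }

    class CustomerService {
        private final LoyaltyPointsBank loyalty;
        private final EmailService email;
        private final PostalService post;

        CustomerService(LoyaltyPointsBank loyalty, EmailService email, PostalService post) {
            this.loyalty = loyalty;
            this.email = email;
            this.post = post;
        }

        /** The flowchart modeled directly in code: steps run in order, and the
         *  orchestrator always knows how far the process has gotten. */
        void createCustomer(String customerId) {
            loyalty.createAccount(customerId);   // step 1
            email.sendWelcomeEmail(customerId);  // step 2
            post.sendWelcomePack(customerId);    // step 3
        }
    }

The downside the author mentions is visible even here: every new step grows this one class, while the downstream services shrink toward anemic CRUD.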
However, other books I've read on the topic of microservices, and in general the majority of articles I've found on the web, seem to disfavor this orchestration approach and instead suggest using the next one.
Choreography Style
Under choreography style the author says:
With a choreographed approach, we could instead just have the customer
service emit an event in an asynchronous manner, saying Customer
created. The email service, postal service, and loyalty points bank
then just subscribe to these events and react accordingly [...]
This approach is significantly more decoupled. If some
other service needed to reach to the creation of a customer, it just
needs to subscribe to the events and do its job when needed. The
downside is that the explicit view of the business process we see in
[the workflow] is now only implicitly reflected in our system [...]
This means additional work is needed to ensure that you can monitor
and track that the right things have happened. For example, would you
know if the loyalty points bank had a bug and for some reason didn’t
set up the correct account? One approach I like for dealing with this
is to build a monitoring system that explicitly matches the view of
the business process in [the workflow], but then tracks what each of
the services do as independent entities, letting you see odd
exceptions mapped onto the more explicit process flow. The [flowchart]
[...] isn’t the driving force, but just one lens through
which we can see how the system is behaving. In general, I have found
that systems that tend more toward the choreographed approach are more
loosely coupled, and are more flexible and amenable to change. You do
need to do extra work to monitor and track the processes across system
boundaries, however. I have found most heavily orchestrated
implementations to be extremely brittle, with a higher cost of change.
With that in mind, I strongly prefer aiming for a choreographed
system, where each service is smart enough to understand its role in
the whole dance.
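To contrast with the orchestration sketch above, here is a minimal in-process sketch of the choreographed style: the customer service only announces the fact, and each downstream service subscribes and reacts on its own. In a real system the bus would be a message broker; the event and handler names are made up for the example:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // The event is a plain fact about the past, not a command.
    record CustomerCreated(String customerId) {}

    class EventBus {
        private final List<Consumer<CustomerCreated>> subscribers = new ArrayList<>();
        void subscribe(Consumer<CustomerCreated> handler) { subscribers.add(handler); }
        void publish(CustomerCreated event) { subscribers.forEach(s -> s.accept(event)); }
    }

    public class ChoreographyDemo {
        public static void main(String[] args) {
            EventBus bus = new EventBus();
            // Each service reacts independently; no central brain coordinates them.
            bus.subscribe(e -> System.out.println("loyalty: create account for " + e.customerId()));
            bus.subscribe(e -> System.out.println("email: send welcome mail to " + e.customerId()));
            bus.subscribe(e -> System.out.println("post: send welcome pack to " + e.customerId()));

            // The customer service just emits the event; the flow stays implicit.
            bus.publish(new CustomerCreated("customer-42"));
        }
    }

Note how the business process is now only implicit, which is exactly why the author recommends a monitoring system that maps what each service did back onto the expected flow.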
Note: To this day I'm still not sure if choreography is just another name for event-driven architecture (EDA), but if EDA is just one way to do it, what are the other ways? (Also see What do you mean by "Event-Driven"? and The Meanings of Event-Driven Architecture.) Also, it seems that things like CQRS and Event Sourcing resonate a lot with this architectural style, right?
Now, after this comes the fun part. The Microservices book does not assume microservices are going to be implemented with REST. As a matter of fact, in the next section the book proceeds to consider RPC and SOA-based solutions, and finally REST. An important point here is that microservices do not imply REST.
So, What About HATEOAS? (Hypermedia as the Engine of Application State)
Now, if we want to follow the RESTful approach, we cannot ignore HATEOAS, or Roy Fielding will be very much pleased to say in his blog that our solution is not truly REST. See his blog post REST APIs must be hypertext-driven:
I am getting frustrated by the number of people calling any HTTP-based
interface a REST API. What needs to be done to make the REST
architectural style clear on the notion that hypertext is a
constraint? In other words, if the engine of application state (and
hence the API) is not being driven by hypertext, then it cannot be
RESTful and cannot be a REST API. Period. Is there some broken manual
somewhere that needs to be fixed?
So, as you can see, Fielding thinks that without HATEOAS you are not truly building RESTful applications. For Fielding, HATEOAS is the way to go when it comes to orchestrating services. I am just learning all this, but to me, HATEOAS does not clearly define who or what is the driving force behind actually following the links. In a UI that could be the user, but in computer-to-computer interactions, I suppose that needs to be done by a higher level service.
According to HATEOAS, the only link the API consumer truly needs to know is the one that initiates the communication with the server (e.g. POST /order). From this point on, REST is going to conduct the flow, because, in the response of this endpoint, the resource returned will contain the links to the next possible states. The API consumer then decides which link to follow and moves the application to the next state.
Despite how cool that sounds, the client still needs to know if the link must be POSTed, PUTed, GETed, PATCHed, etc. And the client still needs to decide what payload to pass. The client still needs to be aware of what to do if that fails (retry, compensate, cancel, etc.).
I am fairly new to all this, but for me, from the HATEOAS perspective, this client, or API consumer, is a higher-order service. If we think of it from the perspective of a human, you can imagine an end user on a web page deciding which links to follow, but still, the programmer of the web page had to decide what method to use to invoke the links, and what payload to pass. So, to my point, in a computer-to-computer interaction, the computer takes the role of the end user. Once more, this is what we call an orchestration service.
I suppose we can use HATEOAS with either orchestration or choreography.
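To illustrate the client-as-engine idea, here is a minimal sketch of a consumer that knows only the entry point and then follows a link from the response. The endpoint, the "payment" relation, and the response format are assumptions for the example; a real client would use a proper hypermedia (HAL/JSON) parser, and, as noted above, it still has to know out-of-band which method and payload each relation expects:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class HypermediaClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Step 1: the only URI the client knows up front.
            HttpResponse<String> order = client.send(
                    HttpRequest.newBuilder(URI.create("https://shop.example/orders"))
                            .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"book\"}"))
                            .header("Content-Type", "application/json")
                            .build(),
                    HttpResponse.BodyHandlers.ofString());

            // Step 2: extract the "payment" link from the response body
            // (a naive regex stands in for a real hypermedia parser).
            Matcher m = Pattern.compile("\"rel\":\"payment\",\"href\":\"([^\"]+)\"")
                    .matcher(order.body());
            if (m.find()) {
                // Step 3: the client chose to follow this link -- and had to know
                // out-of-band that "payment" expects a POST with this payload.
                client.send(
                        HttpRequest.newBuilder(URI.create(m.group(1)))
                                .POST(HttpRequest.BodyPublishers.ofString("{\"method\":\"card\"}"))
                                .header("Content-Type", "application/json")
                                .build(),
                        HttpResponse.BodyHandlers.ofString());
            }
        }
    }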
The API Gateway Pattern
Another interesting pattern is suggested by Chris Richardson who also proposed what he called an API Gateway Pattern.
In a monolithic architecture, clients of the application, such as web
browsers and native applications, make HTTP requests via a load
balancer to one of N identical instances of the application. But in a
microservice architecture, the monolith has been replaced by a
collection of services. Consequently, a key question we need to answer
is what do the clients interact with?
An application client, such as a native mobile application, could make
RESTful HTTP requests to the individual services [...] On the surface
this might seem attractive. However, there is likely to be a
significant mismatch in granularity between the APIs of the individual
services and data required by the clients. For example, displaying one
web page could potentially require calls to large numbers of services.
Amazon.com, for example,
describes how some
pages require calls to 100+ services. Making that many requests, even
over a high-speed internet connection, let alone a lower-bandwidth,
higher-latency mobile network, would be very inefficient and result in
a poor user experience.
A much better approach is for clients to make a small number of
requests per-page, perhaps as few as one, over the Internet to a
front-end server known as an API gateway.
The API gateway sits between the application’s clients and the
microservices. It provides APIs that are tailored to the client. The
API gateway provides a coarse-grained API to mobile clients and a
finer-grained API to desktop clients that use a high-performance
network. In this example, the desktop clients make multiple requests
to retrieve information about a product, whereas a mobile client
makes a single request.
The API gateway handles incoming requests by making requests to some
number of microservices over the high-performance LAN. Netflix, for
example,
describes
how each request fans out to on average six backend services. In this
example, fine-grained requests from a desktop client are simply
proxied to the corresponding service, whereas each coarse-grained
request from a mobile client is handled by aggregating the results of
calling multiple services.
Not only does the API gateway optimize communication between clients
and the application, but it also encapsulates the details of the
microservices. This enables the microservices to evolve without
impacting the clients. For example, two microservices might be
merged. Another microservice might be partitioned into two or more
services. Only the API gateway needs to be updated to reflect these
changes. The clients are unaffected.
Now that we have looked at how the API gateway mediates between the
application and its clients, let’s now look at how to implement
communication between microservices.
This sounds pretty similar to the orchestration style mentioned above, just with a slightly different intent: in this case, it seems to be all about performance and the simplification of interactions.
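As a rough sketch of the aggregation job the gateway does, the example below fans one coarse-grained request out to three backend services in parallel and merges the results. The internal service URLs are made up; a real gateway would also handle auth, caching, timeouts, and partial failures:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    public class ProductPageGateway {
        private static final HttpClient client = HttpClient.newHttpClient();

        private static CompletableFuture<String> fetch(String url) {
            return client.sendAsync(
                            HttpRequest.newBuilder(URI.create(url)).GET().build(),
                            HttpResponse.BodyHandlers.ofString())
                    .thenApply(HttpResponse::body);
        }

        /** One mobile request in, three backend requests out over the LAN. */
        public static String productPage(String productId) {
            CompletableFuture<String> info    = fetch("http://catalog.internal/products/" + productId);
            CompletableFuture<String> price   = fetch("http://pricing.internal/prices/" + productId);
            CompletableFuture<String> reviews = fetch("http://reviews.internal/reviews/" + productId);

            // Join the fan-out and build a single, client-tailored response.
            return "{\"info\":" + info.join()
                    + ",\"price\":" + price.join()
                    + ",\"reviews\":" + reviews.join() + "}";
        }
    }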
Trying to aggregate the different approaches here.
Domain Events
The dominant approach for this seems to be using domain events, where each service publishes events describing what has happened, and other services can subscribe to those events.
This seems to go hand in hand with the concept of smart endpoints, dumb pipes that is described by Martin Fowler here: http://martinfowler.com/articles/microservices.html#SmartEndpointsAndDumbPipes
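As a sketch of what publishing such a domain event over a dumb pipe can look like, the example below assumes Kafka as the broker (with the kafka-clients library on the classpath); the topic name and payload are made up. The invoicing service from the question would simply consume the order-shipped topic and create the invoice on its own:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class OrderShippedPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The shipment service announces the fact; subscribers react on their own.
                producer.send(new ProducerRecord<>("order-shipped",
                        "order-42", "{\"orderId\":\"order-42\",\"shippedAt\":\"2015-01-01T12:00:00Z\"}"));
            }
        }
    }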
Proxy
Another approach that seems common is to wrap the business flow in its own service,
where the proxy orchestrates the interaction between the microservices.
[Diagram omitted: the proxy service calling each microservice in turn.]
Other composition patterns
This page contains various composition patterns.
So, how is orchestration of microservices different from orchestration of old SOA services that are not “micro”? Not much at all.
Microservices usually communicate using HTTP (REST) or messaging/events. Orchestration is often associated with orchestration platforms that allow you to create a scripted interaction among services to automate workflows. In the old SOA days, these platforms used WS-BPEL. Today's tools don't use BPEL. Examples of modern orchestration products: Netflix Conductor, Camunda, Zeebe, Azure Logic Apps, Baker.
Keep in mind that orchestration is a compound pattern that offers several capabilities to create complex compositions of services. Microservices are more often seen as services that should not participate in complex compositions and rather be more autonomous.
I can see a microservice being invoked in an orchestrated workflow to do some simple processing, but I don’t see a microservice being the orchestrator service, which often uses mechanisms such as compensating transactions and state repository (dehydration).
So you have two services:
Invoice micro service
Shipment micro service
In real life, you would have something that holds the order state. Let's call it an order service. Next, you have order-processing use cases, which know what to do when the order transitions from one state to another. All these services contain a certain set of data, and now you need something else that does all the coordination. This might be:
A simple GUI knowing all your services and implementing the use cases ("I'm done" calls the shipment service)
A business process engine, which waits for an "I'm done" event. This engine implements the use cases and the flow.
An orchestration micro service, let's say the order processing service itself that knows the flow/use cases of your domain
Anything else I did not think about yet
The main point of this is that the control is external. This is because all your application components are individual building blocks, loosely coupled. If your use cases change, you have to alter one component in one place: the orchestration component. If you add a different order flow, you can easily add another orchestrator that does not interfere with the first one. The microservice way of thinking is not only about scalability and doing fancy REST APIs, but also about a clear structure, reduced dependencies between components, and reuse of common data and functionality that are shared throughout your business.
HTH, Mark
If state needs to be managed, then Event Sourcing with CQRS is the ideal way of communication. Otherwise, an asynchronous messaging system (AMQP) can be used for inter-microservice communication.
From your question, it is clear that ES with CQRS should be the right mix. If using Java, take a look at the Axon Framework, or build a custom solution using Kafka or RabbitMQ.
You can implement orchestration by using the Spring State Machine model.
Steps
Add the below dependency to your project (if you are using Maven):
    <dependency>
        <groupId>org.springframework.statemachine</groupId>
        <artifactId>spring-statemachine-core</artifactId>
        <version>2.2.0.RELEASE</version>
    </dependency>
Define states and events, e.g. STATE1 and STATE2, and events EVENT1 and EVENT2.
Provide the state machine implementation in a buildMachine() method:
configureStates
configureTransitions
Send events to state machine
Refer to the documentation page for the complete code; a minimal sketch of the idea follows.
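Here is a minimal sketch of those steps using the spring-statemachine-core dependency shown above. The state and event names are the placeholders from step 2; a real order flow would model its own states and attach actions to the transitions:

    import java.util.EnumSet;
    import org.springframework.statemachine.StateMachine;
    import org.springframework.statemachine.config.StateMachineBuilder;

    public class OrderStateMachine {
        enum States { STATE1, STATE2 }
        enum Events { EVENT1 }

        static StateMachine<States, Events> buildMachine() throws Exception {
            StateMachineBuilder.Builder<States, Events> builder = StateMachineBuilder.builder();

            // configureStates: initial state plus the full state set.
            builder.configureStates()
                    .withStates()
                    .initial(States.STATE1)
                    .states(EnumSet.allOf(States.class));

            // configureTransitions: EVENT1 moves STATE1 -> STATE2.
            builder.configureTransitions()
                    .withExternal()
                    .source(States.STATE1).target(States.STATE2).event(Events.EVENT1);

            return builder.build();
        }

        public static void main(String[] args) throws Exception {
            StateMachine<States, Events> machine = buildMachine();
            machine.start();                    // enters STATE1
            machine.sendEvent(Events.EVENT1);   // transitions to STATE2
            System.out.println(machine.getState().getId());
        }
    }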
I have written a few posts on this topic; maybe they can also help:
API Gateway pattern - Coarse-grained API vs fine-grained APIs
https://www.linkedin.com/pulse/api-gateway-pattern-ronen-hamias/
https://www.linkedin.com/pulse/successfulapi-ronen-hamias/
Coarse-grained vs Fine-grained service API
By definition, a coarse-grained service operation has a broader scope than a fine-grained one, although the terms are relative. Coarse-grained operations increase design complexity but can reduce the number of calls required to complete a task. In a microservices architecture, coarse-grained operations may reside at the API Gateway layer and orchestrate several microservices to complete a specific business operation. Coarse-grained APIs need to be carefully designed, because involving several microservices that manage different domains of expertise carries the risk of mixing concerns in a single API and breaking the rules described above. Coarse-grained APIs may introduce a new level of granularity for business functions that would not exist otherwise. For example, hiring an employee may involve two microservice calls: one to the HR system to create an employee ID, and another to the LDAP system to create a user account. Alternatively, the client could have performed the same task with two fine-grained API calls. While the coarse-grained API represents the business use case ("create user account"), the fine-grained APIs represent the capabilities involved in that task. Furthermore, fine-grained APIs may involve different technologies and communication protocols, while a coarse-grained API abstracts them into a unified flow. When designing a system, consider both, as again there is no golden approach that solves everything, and there is a trade-off in each. Coarse-grained operations are particularly suited as services to be consumed in other business contexts, such as other applications, lines of business, or even other organizations across the enterprise's boundaries (typical B2B scenarios).
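As a sketch of the hire-employee example, the coarse-grained operation below composes the two fine-grained calls behind one business use case. The HrSystem and LdapSystem interfaces and their methods are made up for illustration; the point is only the difference in granularity:

    // Fine-grained capabilities, each owned by a different system.
    interface HrSystem   { String createEmployeeId(String fullName); }
    interface LdapSystem { void createUserAccount(String employeeId, String fullName); }

    class HiringFacade {
        private final HrSystem hr;
        private final LdapSystem ldap;

        HiringFacade(HrSystem hr, LdapSystem ldap) {
            this.hr = hr;
            this.ldap = ldap;
        }

        /** Coarse-grained: one business use case hiding two fine-grained calls. */
        String hireEmployee(String fullName) {
            String employeeId = hr.createEmployeeId(fullName);   // fine-grained call 1
            ldap.createUserAccount(employeeId, fullName);        // fine-grained call 2
            return employeeId;
        }
    }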
The answer to the original question is the Saga pattern, sketched below.
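Here is a minimal sketch of an (orchestrated) saga for the shipment/invoice flow: each step pairs an action with a compensating action, and a failure rolls back the completed steps in reverse order. This in-memory version is illustrative only; real sagas persist their progress so they survive crashes:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    public class Saga {
        public interface Step {
            void execute();
            void compensate();
        }

        /** Runs the steps in order; on failure, compensates completed steps in reverse. */
        public static void run(List<Step> steps) {
            Deque<Step> completed = new ArrayDeque<>();
            try {
                for (Step step : steps) {
                    step.execute();
                    completed.push(step);  // remember for possible rollback
                }
            } catch (RuntimeException e) {
                while (!completed.isEmpty()) {
                    completed.pop().compensate();  // undo in reverse order
                }
                throw e;
            }
        }
    }

A choreography-based saga achieves the same effect with events instead of a coordinator; either way, compensation replaces the distributed transaction.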

How much of a web site's security comes along with server security?

I will state a claim; please give your point of view about it:
For my web site, if my server is secure (the server admin guarantees that) and I prevent XSS and SQL injection attacks, is my web site secure?
(Please leave your answer with a reference.)
Thanks
Edit 1:
All of the above items + cross-site request forgery (CSRF)
Firstly, don't expect to reach a "secure" end state as if it were an absolute position - you can't. Software security is about reducing risk, and you won't ever reach a position of no risk.
There are many, many other risks you've missed: broken authentication and session management, insecure direct object references, security misconfiguration, insecure cryptographic storage, failure to restrict URL access, insufficient transport layer protection and unvalidated redirects and forwards to name a few. These are all out of the OWASP Top 10 and I suggest you start with these.
Make sure you understand:
The risk
How it's exploited
How you can protect against it
If you'd like to see all this in the context of ASP.NET have a read through the OWASP Top 10 for .NET developers series.
And there are risks beyond these top 10 too; they're just the most common ones in web apps.
No. There are other attack methods. For example, http://en.wikipedia.org/wiki/Cross-site_request_forgery
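For the CSRF case specifically, the classic defense is the synchronizer-token pattern: a random token stored in the session, embedded in each form, and checked on every state-changing request. The idea is platform-neutral even though the sketch below uses the Java servlet API for illustration; the attribute and parameter names are assumptions:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class CsrfGuard {
        private static final SecureRandom RANDOM = new SecureRandom();

        /** Called when rendering a form: creates (once) and returns the session's token. */
        public static String tokenFor(HttpSession session) {
            String token = (String) session.getAttribute("csrfToken");
            if (token == null) {
                byte[] bytes = new byte[32];
                RANDOM.nextBytes(bytes);
                token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
                session.setAttribute("csrfToken", token);
            }
            return token;
        }

        /** Called on every state-changing request: rejects token mismatches. */
        public static void check(HttpServletRequest request) throws ServletException {
            String expected = (String) request.getSession().getAttribute("csrfToken");
            String actual = request.getParameter("csrfToken");
            boolean ok = expected != null && actual != null
                    && MessageDigest.isEqual(expected.getBytes(StandardCharsets.UTF_8),
                                             actual.getBytes(StandardCharsets.UTF_8));
            if (!ok) {
                throw new ServletException("Possible CSRF: token missing or mismatched");
            }
        }
    }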

Create a scalable ASP.NET MVC web site without using Session

I'm working on a web project using ASP.NET MVC, which I'll have to deploy to a farm environment.
I've read a lot of articles and I'm thinking of completely disabling SessionState. I think this would make for a more robust application and save me a couple of headaches (everything I've read tells me that handling sessions on a farm isn't trivial).
There are some things that I still don't have totally clear with this approach, though, the main one being the authentication/authorization process. Basically, I'm not sure how (or if) I can handle user sessions if SessionState is not enabled on the server.
If a user logs into the web site and then tries to access another page, how can I know that the user is already logged in?
I know using cookies alone is insecure. I thought of a mix of cookies with the session ID stored in the DB, but I suppose that if I disable SessionState I won't have access to the session ID either.
What's the best approach on this? Is there any recommended book/article you can point me to so I can get this clear?
Thanks a lot for your help
I think you should use Forms Authentication for this. It will manage your logged-in user name, and you can also set up authorization through it.
http://msdn.microsoft.com/en-us/library/ff647070.aspx
http://msdn.microsoft.com/en-us/library/xdt4thhy.aspx
http://www.codeproject.com/KB/web-security/formsroleauth.aspx
http://www.beansoftware.com/ASP.NET-Tutorials/Forms-Authentication-Active-Directory.aspx
These links answer each of your questions. Through them you can manage role-based authorization and sessions.
If your application supports workflows whose state you want to persist across application recycling (or cluster node failure), you may be able to ignore the persistent-session complexities completely.
Consider an ecommerce checkout, or a similar multi-step process that requires a lot of state management before completion. The suggestion is to design the model of your application in such a way that during these "steps" the progress of the workflow is persisted to the data store natively through the model. That is, the "workflow" is not some externality to the main model of your app, treated as a "temporary" thing that requires a persistence mechanism like the ASP.NET Session, as opposed to the app's regular data store (database).
For example, rather than storing the checkout object tree (lists of items, orders, etc.) in Session, persist it to the database itself. This way, not only does that "partially completed checkout" survive node failure or application recycling, but additionally, if that user has to go put out an urgent fire in the kitchen or their Windows Update crashes their PC, they can resume when they log in next. :D
And: you avoid all that complex distributed session management stuff. Yuk!
I know this answer extends further than just the authentication point that the question actually highlights, but this is good to know and certainly a problem clustered/farm environments place on aspnet apps.
Hanselman on clustered MVC apps: http://www.hanselman.com/blog/LoadBalancingAndASPNET.aspx
FormsAuthentication and the ASP.NET user profile work without SessionState enabled--they run on cookies and database lookups by default.
For shopping cart-type scenarios I'd strongly consider just saving the data in the database and tagging users -- it lets people come back and grab abandoned carts.
What can and will break without SessionState enabled is MVC's TempData -- it stashes things in the session between pages. But if you just avoid using it, you are golden.
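As a sketch of the "persist it to the database" idea from the answers above, cart state keyed by user rather than by session survives node failures and lets users resume abandoned carts. The in-memory map below stands in for a database table, and all names are made up for illustration:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class CartRepository {
        // Stand-in for a "carts" table keyed by user ID.
        private final Map<String, List<String>> cartsByUserId = new ConcurrentHashMap<>();

        public void addItem(String userId, String itemId) {
            cartsByUserId.computeIfAbsent(userId, id -> new CopyOnWriteArrayList<>()).add(itemId);
        }

        /** Works on any node in the farm, and after a node failure:
         *  nothing here depends on server-affine session state. */
        public List<String> cartFor(String userId) {
            return cartsByUserId.getOrDefault(userId, List.of());
        }
    }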

Which .NET architecture should I implement for 10,000 concurrent users in a web application?

I need to create a web application for task delegation, monitoring, and report generation for an organization.
I am thinking ASP.NET MVC should be the architecture to support so many concurrent users. Is there any other, better architecture that I should be looking for?
What kind of server configuration will be required to host this application?
How do I test with so many users connected before I launch this application? Are there any free/economical testing tools available?
Thanks in advance.
anil
The choice of MVC versus WebForms has little/nothing to do with the ability of the app to handle load. Your problems will be reading/writing to the DB, and that doesn't change no matter which of the two you choose.
Ideas for improving the ability to handle load:
First and foremost: the absolute minimum is two servers, a web server and a DB server; they should NEVER run on the same box.
DB:
Efficient queries against the DB, indexes in the DB, denormalizing tables that are hit a lot, CACHE, CACHE, CACHE, running the DB in a cluster, oh, and did I mention CACHING?
Processing:
if you need heavy processing, do it in web services that can run on machines separate from the web servers, so that you can scale out (buy more servers and put them behind a load balancer if needed)
WEB:
avoid the need for server affinity (so that it doesn't matter which web server serves a given user at any given time); this means using the DB or a StateServer to store sessions and syncing the MachineKey on the servers.
the decision to use MVC or not has NO impact on the ability to handle 10k concurrent users; however, it's a HUGE benefit to use MVC if you want the site to be unit-testable
remember: Applications are either testable or detestable, your choice
Cache, cache, cache, cache :-) A smart caching policy will make even one server go a long way. Aside from that, you will need to find out where your bottleneck will be. If your application is database-heavy, then you will need to consider scaling your database, either by clustering or by sharding. If you expect your web server to be the bottleneck (for example, if you are doing a lot of processing, like image processing or something), then you can put a load balancer in place to distribute requests between N servers in your web farm.
For a setup this large, I would highly recommend using a distributed memory caching provider layered above your database. I would also really suggest using an ORM that has built-in support for the memory cache, like NHibernate, since in an application of this scale your biggest bottleneck will definitely be your database.
You will most likely need a web farm for this scenario. Even if a single server is strong enough for it currently, at some point in the near future you will most likely outgrow a single box, which is why it's important to architect in the distributed cache first: you can then grow your farm out without having to re-architect your entire system. A sketch of the cache-aside idea follows.
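Since every answer here shouts caching, here is a minimal sketch of the cache-aside pattern they are describing: check the cache first, fall back to the database only on a miss, then populate the cache. The map stands in for a distributed cache such as memcached or Redis, and the loader for the real DB query; a production version also needs expiry and invalidation on writes:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    public class CacheAside<K, V> {
        private final Map<K, V> cache = new ConcurrentHashMap<>();
        private final Function<K, V> loadFromDatabase;

        public CacheAside(Function<K, V> loadFromDatabase) {
            this.loadFromDatabase = loadFromDatabase;
        }

        public V get(K key) {
            // Hot path: served from memory, no DB round trip.
            V cached = cache.get(key);
            if (cached != null) return cached;

            // Miss: hit the database once, then keep the result around.
            V loaded = loadFromDatabase.apply(key);
            cache.put(key, loaded);
            return loaded;
        }
    }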

Security for a web service with many methods

I am planning to write a .NET web application using SOA, which means data operations are made using web methods. There will be many, many methods, so I have the following questions:
How should I handle security?
Should I split them into more services?
Should I call them using reflection?
Any tips will help, because I am new to SOA.
I would suggest you use WCF instead of .NET web services. WCF gives you a lot of flexibility regarding security and many other aspects. Especially: SOA does not equal web services. With WCF you can configure the channel your data is sent over (e.g. HTTP, TCP, MSMQ, etc.).
Regarding reflection, I see no reason to use it. Reflection is slow, hard to debug, and not really related to SOA at all. Debugging SOAs is challenging enough, so use reflection sparingly.
As you can imagine, that's not a simple subject. So I would partition it this way: minimally, your question comprises two aspects of security:
Authentication: knowing who your calling party is
Authorization: knowing what that calling part is allowed to do
You have different options for both. For example, you can handle authentication through multiple standards like WS-{Security|Trust|etc.} and, on the other end, authorization through AzMan roles (which BTW don't scale very well).
With respect to technology, I agree with the other posts: you should opt for WCF. That allows you to leverage those standards and presents more options for the different aspects of security, including auditing.

Resources