I'm dealing with an architecture approach that doesn't fully convince me, but I can't figure out a good alternative.
Basically our company is pretty much always under attack... I mean there are constant phishing, spoofing, brute-forcing, and similar attacks against our public web site.
The decision has been taken to deploy the web apps in a 3-tier configuration (I really do mean tier, not layer, as the two are often confused): the presentation tier sits on its own machine, in a subnet exposed to the internet, and the rest of the application lives on an internal subnet, with an application server that exposes everything needed by the presentation server or other internal systems, and obviously a database server.
Now, the separation between presentation and business logic is very heavy, both in terms of development and in terms of performance: using a DLL dedicated to data access is obviously faster than calling a remote service to perform the same operation.
This, on the other hand, has been seen as the most practical way to improve security: in this deployment model the presentation tier could be broken through by some attacker, but even then they gain no access to the database, and that's the important part.
Communication between the presentation tier and the business tier goes through WCF services exposing authenticated SOAP methods.
Moreover, the approach is to keep all the business logic in the BL tier, letting the presentation tier be quite "stupid": the BL tier is not merely a secured DAL, it contains all the logic.
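Schematically, the service boundary looks something like this; the contract and type names here are simplified stand-ins, not our real ones:

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Hypothetical contract the business tier exposes over authenticated SOAP.
    [ServiceContract]
    public interface IOrderService
    {
        // The presentation tier never touches the database; it only calls operations like this.
        [OperationContract]
        OrderDto GetOrder(int orderId);
    }

    [DataContract]
    public class OrderDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string CustomerName { get; set; }
    }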
Is that an acceptable approach? Is there some smarter way to handle such a scenario that we're missing? Would you prefer to keep the BL on the presentation tier, leaving just the DAL in the middle tier? And for what reason?
I'm feeling pretty unsure about our solution, so I'm asking for any advice.
I'm studying SOA and web services for a science paper. My current understanding is that every SOA architecture needs a service broker.
Web services are concrete implementations of an SOA, so do they come with a service broker as well? For example, I create a web service in asp.net which returns "hello world". By creating it, do I create a service broker too?
Don't be fooled by answers that are copy-pasted from Wikipedia :-)
Web services are concrete implementations of an SOA
This assumption/statement is wrong. At the very least, there is no direct relationship between SOA and web services. SOA is an architectural paradigm, whereas a web service is a concrete technology (stack) based on WSDL and its result, the SOAP protocol. Nothing more. Web services may help to establish the loosely coupled service landscape which the SOA paradigm expects, but you could also build up an SOA landscape with other technology stacks (self-written hacks, RMI, even REST, for instance).
Repository
The thing is: when you start building up your SOA landscape, you (or others) will code services (i.e. web services) where each service has a technical contract (WSDL, WADL, ...) as a base for the implementation. Your clients will ask for it, and you want to store it somewhere. This somewhere is usually a service repository. You could develop your own, use the UDDI standard, or just buy one of the products from the big vendors (IBM, TIBCO, Oracle, etc.).
Broker
A message broker within the SOA context is a piece of software which supports the decoupling of the connected partner systems. Commonly it's called an ESB (enterprise service bus). One of the goals of the SOA paradigm is that services can be used by anyone (reusability). Therefore you don't want to connect your services with point-to-point connections (aka spaghetti architecture): just imagine that one of the service participants changes its hardware/IP; this would be a nightmare for all the connected partner systems. That's why the ESB was invented; it acts between the service consumer and the service provider.
Typically, these ESB products support a lot of technology stacks/APIs like HTTP, JMS, REST, etc.
Source: I work with a self-claimed SOA landscape and thousands of different (web-)services for a big company for a long time now.
A Web service is a set of related application functions that can be programmatically invoked over the Internet. Businesses can dynamically mix and match Web services to perform complex transactions with minimal programming. Web services allow buyers and sellers all over the world to discover each other, connect dynamically, and execute transactions in real time with minimal human interaction.
Web services are self-contained, self-describing modular applications that can be published, located, and invoked across the Web.
A network component in a Web Services architecture can play one or more fundamental roles: service provider, service broker, and service client.
Service brokers register and categorize published services and provide search services. For example, UDDI acts as a service broker for WSDL-described Web services.
They have many common features, but what is the difference?
MOM allows asynchronous communication while SOA does not; is that the only difference?
SOA, Service Oriented Architecture, is an architecture that defines how to structure access to business information between different applications. In a nutshell: usually one application needs something done with a piece of information (be it an order file or anything else), so that application has a need. Another application may be able to do the corresponding processing of that piece of information, so it has a capability. The first application then consumes the service of the second application, which provides the service (no matter the underlying technology, which can be anything such as JMS, HTTP/SOAP, HTTP/REST, email, FTP, etc.). To make this work, a contract between the first application and the service has to be defined, which spells out such things as message format (XSD or similar), protocol (HTTP/SOAP? JMS?), and so on.
MOM, Message Oriented Middleware, on the other hand, is just a family of software/middleware platforms. MOMs are actual implementations, not a high-level concept like SOA. They can be used to implement an SOA architecture, an event-driven architecture, or other architectures. Usually, MOM enriches a set of applications with asynchronous messaging, where a MOM server stores and forwards the messages. Often things such as transactions, guaranteed delivery, fail-over, loose coupling, and load balancing are built into MOM implementations. Examples of MOM are IBM WebSphere MQ, Apache ActiveMQ, RabbitMQ, JBoss HornetQ, etc.
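To make the "stores and forwards" point concrete, here is a minimal sketch using MSMQ (System.Messaging) as the MOM; the queue path and message are made up for the example:

    using System;
    using System.Messaging;

    class MomSketch
    {
        const string QueuePath = @".\private$\orders"; // hypothetical local queue

        static void Main()
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath);

            // Producer: the middleware stores the message even if the consumer is down.
            using (var queue = new MessageQueue(QueuePath))
                queue.Send("order #42 received", "NewOrder");

            // Consumer: picks the message up whenever it is ready (asynchronous decoupling).
            using (var queue = new MessageQueue(QueuePath))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                Message msg = queue.Receive();
                Console.WriteLine(msg.Body); // "order #42 received"
            }
        }
    }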
Message oriented middleware (MOM) is a type of technology, whereas SOA is a type of architecture. Even though a lot of people think about web services when they talk about SOA, you can use MOM to implement it as well (in fact, in many cases that's the better option).
I am analysing various technologies in ASP.NET so I can use them better and get all their benefits (such as code reuse, saving time by knowing when to use each technology, etc.). I'm currently working with a WCF implementation. Where will I be using it most effectively? Any suggestions would be appreciated.
Thank you.
You'll be using WCF in scenarios where you need to transport data between logical layers that sit in different physical tiers.
For example, a client-server application that should stream data from the server to the client and vice versa.
Or a web application that exposes a Web Service API.
It's all about simplifying socket programming over TCP, UDP and other protocols on top of these like HTTP and SOAP.
If you need a networked solution, WCF is one of the best ways to get good results easily and in less time, and to end up with a configurable, easy-to-deploy, easy-to-host n-tier program.
WCF is a programming model that allows the developer to create distributed systems that follow SOA.
If you are looking at a system which needs to be loosely coupled and if you as a developer want to control the flow of data between layers to a great extent then a WCF service is the one for you.
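As a minimal illustration of that programming model (the contract, implementation, and address below are invented for the example):

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IGreetingService
    {
        [OperationContract]
        string Hello(string name);
    }

    public class GreetingService : IGreetingService
    {
        public string Hello(string name) { return "Hello, " + name; }
    }

    class Program
    {
        static void Main()
        {
            // Self-host the service (the address is made up).
            using (var host = new ServiceHost(typeof(GreetingService)))
            {
                host.AddServiceEndpoint(typeof(IGreetingService),
                    new BasicHttpBinding(), "http://localhost:8000/greeting");
                host.Open();

                // Consume it through a typed proxy built from the same contract.
                var factory = new ChannelFactory<IGreetingService>(
                    new BasicHttpBinding(),
                    new EndpointAddress("http://localhost:8000/greeting"));
                IGreetingService proxy = factory.CreateChannel();
                Console.WriteLine(proxy.Hello("world")); // Hello, world
                factory.Close();
            }
        }
    }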
Take a look at Wrox's Professional WCF Programming if you can; you will get good know-how.
Is an Enterprise Service Bus (a tool that acts as a mediator, a message broker, a service enabler, a schema transformer, a transparent location provider, a service aggregator, a load balancer, a monitor, and all that stuff) responsible for orchestrating services?
What about putting an automated business process with more than a thousand steps and dozens of service invocations inside your enterprise service bus?
Would you do it, or would you use a specialist in orchestration such as a BPEL engine?
Please give me your opinion.
Yes and no. There's a thin, and sometimes indistinguishable line between orchestration and aggregation/service augmentation.
In general, if you've got any long-running or complex business process (process being the key word, although I'm going to avoid defining it) - that's best suited to BPEL.
Simple tasks, such as aggregating the results of three service calls, could and often should be done in an ESB layer.
It's not worth losing too much sleep over, though.
Disclaimer: I am an IBM ESB consultant, although I'm not writing this in an official capacity.
No, an ESB's responsibility is not the orchestration of services (per se).
The ESB provides a layer of abstraction at the "software infrastructure level".
This means that an ESB is a "single logical abstract port of call for connectivity" with any service that is published on the bus.
The ESB being abstract means that consumers of services on the bus don't need to know the deployment details of a service, and it is possible to expose internally facing services with a single document model. The ESB provides low-level services (such as protocol translation and message transformation) so that, internally, services can communicate in a simplified fashion.
This implies some orchestration: the ESB provides orchestration of the aforementioned low-level services (e.g. when service X is called via IIOP, translate this to SOAP with Attachments, then transform the request from whatever serialized form into an XML payload).
The orchestration you would typically avoid in an ESB is: In order to process this (insurance) sale, we first need to validate the information provided by the buyer, then we need to underwrite the risk of insuring, and finally calculate the premium that needs to be paid for the insurance, after which we need to… etc.
The steps described above are clearly a business process (which could even be interrupted… e.g. if automatic underwriting is not possible, then a human underwriter needs to further assess the risk).
Business Services (e.g. Validation, Underwriting, Premium Calculation) composed into a Business Process (e.g. Insurance Sale) is what is typically referred to as Orchestration, and that is best suited to a Business Process Engine, defined using a formalized Business Process Modeling Language (such as BPEL).
Also, making a guess about the many steps in your process: in the above example, Validation is a (coarse-grained) service. The validation rules themselves are internal to that service. For complex business rules (i.e. not business process), the use of a Business Rules Engine may be required.
My short, quick answer is NO, that is not its responsibility.
I would rather leave that to a BPEL engine or a BPM suite.
Mhh I don't know what else to add :) ... Good luck?
Now my own vision.
Considering all the work an ESB has to do, putting service orchestration inside the main infrastructure element of your SOA is not a good idea.
Aggregate, OK! But keeping your communication channel busy with business logic will, for sure, have a terrible impact on its ability to deliver other features.
After all, most ESBs, such as BEA Aqualogic Service Bus, have limited support for orchestration, including a lack of stateful capabilities, of activities like wait (a timer) or pick (wait for some input to move the process on), and of split/join capabilities (already added in ALSB 3.0), and so on.
No way. Just use a tool like a BPEL engine or Weblogic Integration.
Thanks.
Whenever you have two or more services that interact, use a service orchestrator, i.e. for composition and process-control services. If you have an ESB, expose this composition service on the ESB. Now, if you have to compose a new service that includes this composition service, use the orchestrator again and again expose it on the ESB.
Use the ESB as the service delivery mechanism and as a web service broker and proxy. In composing a service, the orchestrator will use the ESB to reach the interacting services. If these interacting services use incompatible XML schemas, the ESB can transform/map them to a common schema at runtime and route service requests based on content, e.g. the namespace.
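A toy illustration of that kind of content-based routing, keying off the message namespace (the namespaces and endpoints are invented):

    using System;
    using System.Xml.Linq;

    class RoutingSketch
    {
        // Route a message to a provider endpoint based on its XML namespace.
        static string RouteByNamespace(string xmlPayload)
        {
            XNamespace ns = XDocument.Parse(xmlPayload).Root.Name.Namespace;

            if (ns == "urn:example:orders:v1") return "http://internal/orders-v1";
            if (ns == "urn:example:orders:v2") return "http://internal/orders-v2";
            throw new InvalidOperationException("No route for namespace: " + ns);
        }

        static void Main()
        {
            string msg = "<Order xmlns='urn:example:orders:v2'><Id>42</Id></Order>";
            Console.WriteLine(RouteByNamespace(msg)); // http://internal/orders-v2
        }
    }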
Yes, orchestration is, in most cases, a responsibility of the ESB. Or, alternatively, if you draw a line between ESB infrastructure and orchestration infrastructure, then you are doing so on a physical level for performance reasons, not as a logical attribution of responsibility.
You have two choices. When, for example, an HR system receives a new employee, where do you place the business logic that says "the compliance department will need to approve and check first, and then if that's OK, the HR department will need to finalise the hire, then the accounting department will need a new entry, and then the payroll system will need updating, and if that fails, then we'll need to send an email to HR"? If all business processes are considered 'owned' by the initiating department/application, then the overall system that is the enterprise becomes complex, with disparate orchestration systems.
The second choice is to centralise the orchestration, essentially making it a logical partner of the messaging platform. If you choose to see these as separate artifacts, that is up to you, but it is equally valid to describe both as the ESB.
An Enterprise Service Bus should never be responsible for orchestrating services.
Orchestration implies a minimum of "smarts", specifically the ability to compensate for failed transactions. Service bus tools will often say they offer "try-catch" or something like that, but the ability to run scoped compensation is the mark of a proper orchestration tool. Additionally, the ability to wait, to know its own state, or to keep things in suspense is another indicator that you're dealing with an orchestrator and not a bus.
Speaking to the 1000+ steps plus dozens of services: consider the if-thens in the process. If all the if-then statements in your 1000 steps speak only to routing, with no change to the payloads, then you're still doing "routing" and therefore still in ESB territory. But if there's even one nested if-then, I start to look for different tools. As an aside, if-thens that look like routing can very quickly come to embody business logic. Once business logic starts showing up, a better language such as BPEL or BPMN is called for.
The example of an orchestra conductor is often given to describe how orchestration works: a central individual directing the musicians according to a score. Often what's left out is the idea that the conductor is not only directing but listening as well, and if something goes wrong can compensate in a reliable, repeatable way.
For instance, imagine our conductor goes to bring in the tuba player, but said tuba player has decided to go do something else. A simple pinball-style "orchestrator" will cue the tuba section, knowing full well it isn't there, and then wait for the audience to complain later. A really savvy conductor would see the tuba gone and immediately bring up the deeper baritone horns to compensate.
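In code terms, the "savvy conductor" boils down to compensation bookkeeping, roughly like this toy sketch (the step names are invented):

    using System;
    using System.Collections.Generic;

    // Toy orchestrator: run steps, and on failure compensate completed steps in reverse.
    class CompensationSketch
    {
        static void Main()
        {
            var compensations = new Stack<Action>();
            try
            {
                Console.WriteLine("reserve seat");                          // step 1 succeeds
                compensations.Push(() => Console.WriteLine("release seat"));

                Console.WriteLine("charge card");                           // step 2 succeeds
                compensations.Push(() => Console.WriteLine("refund card"));

                throw new Exception("tuba player missing");                 // step 3 fails
            }
            catch (Exception ex)
            {
                Console.WriteLine("failure: " + ex.Message);
                while (compensations.Count > 0)
                    compensations.Pop()();                                  // undo in reverse order
            }
        }
    }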
I'm working on a .NET portal which will have lots of concurrent users, so scalability and performance need to be addressed in the design and architecture.
We plan to use load balancing in the application.
Keeping this in mind, what would be the best way of communicating between the IIS web server (hosting the aspx and aspx.cs files) and the application server (hosting .NET assemblies like the business logic and data access layers)?
Should it be .NET Remoting or a SOAP web service? Or is there any other approach?
Thanks.
Is there another approach? Yes - don't distribute your objects.
The most scalable approach is NOT to distribute your objects away from each other. Ask yourself: why do you want to deploy one flavor of code to an "app server" while another flavor of code goes to a "web server"? The communication between those two layers, if they are distributed, will be much much much much (etc etc) more expensive than a local call.
With today's 64-bit servers, with all of that memory, and the hot CPUs, and with ASP.NET's superior memory management, why not put your business logic and DAL on the same physical machine as the ASPX files? Why not?
If you need to scale, add more servers. Simple.
There are good reasons, of course, to distribute. The most common good reasons have to do with domains of ownership, along several axes: security management, or even budget and control. In other words, to take the latter case, if one team is responsible for running the business logic and a separate team is responsible for building and running the web layer, then it may make sense to distribute those two things to allow independence of management. Most of the good reasons for distributing computer code have their origins in the structures of the human organizations using or developing the code.
There is no good technical reason why a web page should not run on the same CPU, sharing the same CLR VM and memory heap, as the database access layer.
Regardless of what you do with distribution, it would be unwise to architect your system with less-than-formal interfaces defining the connections between the layers. If you keep formal interfaces, then it should be no problem for you to measure the performance and efficiency of a distributed approach versus a co-located approach.
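Concretely, "formal interfaces" means something like the following sketch, where the same contract can be satisfied in-process or by a remote proxy (all names and the address are invented):

    using System.ServiceModel;

    // One formal interface; two deployment options behind it.
    [ServiceContract] // harmless metadata in the in-process case
    public interface ICustomerLookup
    {
        [OperationContract]
        string GetCustomerName(int id);
    }

    // Co-located: a plain assembly deployed to the web app's Bin directory.
    public class LocalCustomerLookup : ICustomerLookup
    {
        public string GetCustomerName(int id) { return "customer " + id; } // stub
    }

    // Distributed: a WCF proxy to a remote app server.
    public static class RemoteCustomerLookup
    {
        public static ICustomerLookup Create()
        {
            var factory = new ChannelFactory<ICustomerLookup>(
                new BasicHttpBinding(),
                new EndpointAddress("http://appserver/customers")); // hypothetical
            return factory.CreateChannel();
        }
    }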
Do you really need an app server? Just how big are we talking, exactly? For example, stackoverflow.com has ~50k uniques a day and doesn't have an app server, so I assume you are talking much bigger than that? Most performance bottlenecks come down to database issues, so I would concentrate there.
I suggest you take a look at the Patterns and Practices group's guidelines for performance, more specifically Chapter 6 - Improving ASP.NET Performance. I agree with Cheeso that you should seriously consider NOT physically splitting your application layer and UI layer if you can avoid it. The P&P guideline has the following notes:
Avoid Unnecessary Process Hops
Although process hops are not as expensive as machine hops, you should avoid process hops where possible. Process hops cause added overhead because they require interprocess communication (IPC) and marshaling. For example, if your solution uses Enterprise Services, use library applications where possible, unless you need to put your Enterprise Services application on a remote middle tier.
Understand the Performance Implications of a Remote Middle Tier
If possible, avoid the overhead of interprocess and intercomputer communication. Unless your business requirements dictate the use of a remote middle tier, keep your presentation, business, and data access logic on the Web server. Deploy your business and data access assemblies to the Bin directory of your application. However, you might require a remote middle tier for any of the following reasons:
You want to share your business logic between your Internet-facing Web applications and other internal enterprise applications.
Your scale-out and fault tolerance requirements dictate the use of a middle tier cluster or of load-balanced servers.
Your corporate security policy mandates that you cannot put business logic on your Web servers.
If you absolutely have to split the application logic up anyway, you could use WCF as the transport mechanism. I'm not sure how it stacks up against Remoting when it comes to performance, but I seem to remember that WCF is the option Microsoft is pushing.
Clemens Vasters (Technical Lead for the Microsoft .NET Service Bus) talks about WCF vs. Remoting in this answer on MSDN forums.
Learn to write asynchronously.
Explore the CCR runtime for example.
Each thread that is blocked waiting for IO responses is one less available to your system.
Turn off 'idealised logging', but leave the ability to switch it back on via an admin console. Logging is often a hidden bottleneck.
CACHE CACHE CACHE!
If it was expensive to get the data the first time, don't pay for it the second!
Avoid ASP.NET's session state; it can seriously bloat and lead to a large slowdown in page responsiveness.
Modify the HTTP headers to specify short browser caching (5 sec - 20 sec, depending on the nature of the content); see the sketch after these tips.
Utilise GZIP while you are at it!
AND USE LOTS OF RAM
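Here is a small sketch of the caching and header tips above, using standard ASP.NET APIs (the cache key, timings, and the expensive call are placeholders):

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class CachingSketch
    {
        // "If it was expensive to get the data the first time, don't pay for it the second!"
        public static object GetReportData(HttpContext ctx)
        {
            object data = ctx.Cache["report-data"];  // cache key is arbitrary
            if (data == null)
            {
                data = LoadExpensiveReport();        // hypothetical expensive call
                ctx.Cache.Insert("report-data", data, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            }
            return data;
        }

        // Short browser caching via HTTP headers (5-20 sec, depending on the content).
        public static void SetShortClientCache(HttpResponse response)
        {
            response.Cache.SetCacheability(HttpCacheability.Public);
            response.Cache.SetExpires(DateTime.Now.AddSeconds(20));
            response.Cache.SetMaxAge(TimeSpan.FromSeconds(20));
        }

        static object LoadExpensiveReport() { return new object(); } // stand-in
    }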
Here are my tips:
1) Move all your static files (images, CSS, JS) to a load balancer like nginx. This will greatly reduce the load on the IIS server, leaving it enough free resources to serve the main requests.
2) Think about caching and avoiding database access altogether.
3) Try to implement REST principles as far as possible.
4) Keep session state to a bare minimum; if possible, avoid it altogether.
There are some good performance and scalability points in these articles from Omar Al Zabir.
10 ASP.NET Performance and Scalability Secrets
and
99.99% available ASP.NET and SQL Server SaaS Production Architecture
(also check out his book Building a Web 2.0 Portal with ASP.NET 3.5)