WCF implementation - ASP.NET

I am analysing various technologies in ASP.NET to understand where each is best used and what its benefits are (such as code reuse and saving time by knowing when to apply a technology). I am currently working on a WCF implementation. Where will I be using it most effectively? Any suggestions would be appreciated.
Thank you.

You'll be using WCF in scenarios where you need to transport data between logical layers that are deployed on different physical tiers.
For example, a client-server application that should stream data from the server to the client and vice versa.
Or a web application that exposes a Web Service API.
It's all about simplifying socket programming over TCP, UDP and the protocols layered on top of them, such as HTTP and SOAP.
If you need a networked solution, WCF is one of the best ways to get good results quickly, and to end up with a configurable n-tier program that is easy to deploy and easy to host.
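To make this concrete, here is a minimal, self-hosted WCF sketch; the contract name and address are illustrative assumptions, not from any particular project.

```csharp
using System;
using System.ServiceModel;

// A hypothetical service contract (names are illustrative).
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

public class GreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name; }
}

class Program
{
    static void Main()
    {
        // Self-host the service over HTTP; swapping the binding (e.g. to
        // NetTcpBinding) changes the transport without touching the contract.
        using (var host = new ServiceHost(typeof(GreetingService), new Uri("http://localhost:8080/greeting")))
        {
            host.AddServiceEndpoint(typeof(IGreetingService), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```

A client can then consume the same contract through a generated proxy or ChannelFactory<IGreetingService>, which is where the code-reuse benefit mentioned in the question comes in.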

WCF is a programming model that allows the developer to create distributed systems that follow SOA.
If you are looking at a system that needs to be loosely coupled, and if you as a developer want to control the flow of data between layers to a great extent, then a WCF service is the one for you.
Take a look at Wrox's Professional WCF Programming if you can; it will give you good know-how.
Dinesh

Related

Service oriented and three-tier architecture together?

Can I use SOA and three-tier architecture in one application? I am using SOAP to connect to external services, along with MQ and Spring MVC. I have an application server and a separate server for the database.
Yes; they are really two very different things.
There is a vast array of different SOA implementation styles; you can do it in many ways. One is to have services talking to APIs over HTTP, where the APIs in question are written in a three-tier style.
You can also have an event-driven architecture where messages are passed on a message bus or a queue of some sort; the services reading and writing to those queues can likewise be written in a three-tier style.
To sum up: yes, you can have SOA and three-tier in the same architectural solution; one does not exclude the other.
Does this help your decision on how to structure your service design and the wider solution architecture?

Does every web service have a service broker?

I am studying SOA and web services for a science paper. My current understanding is that every SOA architecture needs a service broker.
Webservices are concrete implementations of a SOA, so do they therefore have a service broker? For example, if I create a web service in ASP.NET which returns "hello world", do I create a service broker too just by creating it?
Don't be fooled by answers that are copy-pasted from Wikipedia :-)
Webservices are concrete implementations of a SOA
This assumption/statement is wrong. At least, there is no direct relationship between SOA and web services. SOA is an architectural paradigm, whereas a web service is a concrete technology (stack) based on WSDL and its result, the SOAP protocol. Nothing more. Web services may help to establish the loosely coupled service landscape that the SOA paradigm expects, but you could also build up a SOA landscape with other technology stacks (self-written hacks, RMI, or even REST-based ones, for instance).
Repository
The thing is: when you start building up your SOA landscape, you (or others) will code services (i.e. web services) where each service has a technical contract (WSDL, WADL, ...) as a base for the implementation. Your clients will ask for it, and you will want to store it somewhere. This somewhere is usually a service repository. You could develop your own, use the UDDI standard, or buy one of the products from the big vendors (IBM, TIBCO, Oracle, etc.).
Broker
A message broker within the SOA context is a piece of software which supports the decoupling of the connected partner systems. Commonly it's called an ESB (enterprise service bus). One of the goals of the SOA paradigm is that services can be used by anyone (reusability). Therefore you don't want to connect your services by P2P connections (aka spaghetti architecture): just imagine that one of the service participants changes its hardware/IP; that would be a nightmare for all the connected partner systems. That's why the ESB was invented: it acts between the service consumer and the service provider.
Typically, these ESB products support a lot of technology stacks and APIs such as HTTP, JMS, REST, etc.
Source: I have been working with a self-claimed SOA landscape and thousands of different (web) services at a big company for a long time now.
A Web service is a set of related application functions that can be programmatically invoked over the Internet. Businesses can dynamically mix and match Web services to perform complex transactions with minimal programming. Web services allow buyers and sellers all over the world to discover each other, connect dynamically, and execute transactions in real time with minimal human interaction.
Web services are self-contained, self-describing modular applications that can be published, located, and invoked across the Web.
A network component in a Web Services architecture can play one or more fundamental roles: service provider, service broker, and service client.
Service brokers register and categorize published services and provide search services. For example, UDDI acts as a service broker for WSDL-described Web services.

ASP.NET applications split into different tiers

I'm dealing with an architecture approach that doesn't convince me deeply, but I can't figure out a good alternative.
Basically our company is pretty much always under attack... I mean there are constant phishing, spoofing, brute-forcing, etc. attempts against our public web site.
The decision has been taken to deploy the web apps in a 3-tier configuration (I really mean tier, not layer, as the two are often confused), with the presentation tier on a machine in a subnet exposed to the internet, and the rest of the application on an internal subnet, with an application server that exposes everything needed by the presentation server or other internal systems, and obviously a database server.
Now, the separation between presentation and business logic is very heavy both in terms of development and in terms of performance: using a DLL dedicated to data access is obviously faster than calling a remote service to perform the same operation.
This, on the other hand, has been seen as the most practical way to improve security: in this deployment model the presentation tier could be broken through by some hacker, but even then they gain no access to the database from there, and that's the important part.
The communication between the presentation tier and the business tier is made through WCF services, exposing authenticated SOAP methods.
Moreover, the approach is to keep all the business logic in the BL tier, letting the presentation tier be quite "stupid": so the BL tier is not merely a secured DAL layer, but contains all the logic.
Is this an acceptable approach? Is there some smarter way to handle such a scenario that we're missing? Would you prefer to keep the BL on the presentation tier, leaving just the DAL in the middle tier? And for what reason?
I'm feeling pretty unsure about our solution, so I'm asking for any advice.
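For context, here is a minimal sketch of what the kind of authenticated WCF endpoint described in this question might look like when configured in code. The contract, address, and service names are illustrative assumptions, not the actual configuration; a real deployment would also need a service certificate and a username validator.

```csharp
using System;
using System.ServiceModel;

// Illustrative contract for the business tier (names are assumptions).
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    decimal GetOrderTotal(int orderId);
}

public class OrderService : IOrderService
{
    public decimal GetOrderTotal(int orderId) { return 0m; } // stub
}

class BusinessTierHost
{
    static void Main()
    {
        // wsHttpBinding with message-level username authentication,
        // roughly matching the "authenticated SOAP methods" described above.
        var binding = new WSHttpBinding(SecurityMode.Message);
        binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

        using (var host = new ServiceHost(typeof(OrderService), new Uri("http://bl-server:8000/orders")))
        {
            host.AddServiceEndpoint(typeof(IOrderService), binding, "");
            // A real host would also set host.Credentials.ServiceCertificate
            // and plug in a custom UserNamePasswordValidator.
            host.Open();
            Console.WriteLine("Business tier listening. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```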

MOM vs. SOA: what's the difference?

They have many common features, but what is the difference?
MOM allows asynchronous communication while SOA does not; is this the only difference?
SOA, Service-Oriented Architecture, is an architecture that defines how to structure access to business information between different applications. In a nutshell, one application usually needs something done with a piece of information (be it an order file or anything else): that application has a Need. Another application may be able to do the corresponding processing of that piece of information, hence it has a Capability. The first application then Consumes the Service of the second application, which Provides the Service (no matter the underlying technology, which can be anything such as JMS, HTTP/SOAP, HTTP/REST, email, FTP, etc.). To make this work, a Contract between the first application and the Service has to be defined, which clears up such things as message format (XSD or similar), protocol (HTTP/SOAP? JMS?), etc.
MOM, Message-Oriented Middleware, on the other hand, is just a family of software/middleware platforms. MOM products are actual implementations, not a high-level concept like SOA. They can be used to implement a SOA architecture, an event-driven architecture, or other architectures. Usually, MOM enriches a set of applications with asynchronous messaging, where a MOM server stores and forwards the messages. Often things such as transactions, guaranteed delivery, fail-over, loose coupling, and load balancing are built into MOM implementations. Examples of MOM are IBM WebSphere MQ, Apache ActiveMQ, RabbitMQ, JBoss HornetQ, etc.
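To make the MOM side concrete, here is a minimal store-and-forward sketch using MSMQ through System.Messaging; the queue path is an illustrative assumption, and in practice the sender and receiver would be separate processes.

```csharp
using System;
using System.Messaging;

class MomSketch
{
    const string QueuePath = @".\Private$\orders"; // illustrative queue path

    static void Main()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            // Producer side: fire and forget; the middleware stores the message.
            queue.Send("order #42 received");

            // Consumer side (normally another process, possibly much later):
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            Message msg = queue.Receive(TimeSpan.FromSeconds(5));
            Console.WriteLine(msg.Body);
        }
    }
}
```

The decoupling is the point: the consumer does not need to be online when the producer sends, which is exactly the asynchronous behaviour the question asks about.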
Message-oriented middleware (MOM) is a type of technology, whereas SOA is a type of architecture. Even though a lot of people think about web services when they talk about SOA, you can use MOM to implement it as well (in fact, in many cases that's the better option).

Addressing scalability and performance in a .NET web application

I'm working on a .NET portal which will have lots of concurrent users, so scalability and performance need to be addressed in the design and architecture.
We plan to use load balancing in the application.
Keeping this in mind, what would be the best way of communicating between the IIS web server (hosting the aspx and aspx.cs files) and the application server (hosting .NET assemblies such as the business logic and data access layers)?
Should it be .NET Remoting or a SOAP web service? Or is there another approach?
Thanks.
Is there another approach? Yes - don't distribute your objects.
The most scalable approach is NOT to distribute your objects away from each other. Ask yourself: why do you want to deploy one flavor of code to an "app server" while another flavor of code goes to a "web server"? The communication between those two layers, if they are distributed, will be much, much, much more expensive than a local call.
With today's 64-bit servers, with all of that memory, and the hot CPUs, and with ASP.NET's superior memory management, why not put your business logic and DAL on the same physical machine as the ASPX files? Why not?
If you need to scale, add more servers. Simple.
There are good reasons, of course, to distribute. The most common good reasons have to do with domains of ownership, along several axes: security management, or even budget and control. In other words, to take the latter case, if one team is responsible for running the business logic and a separate team is responsible for building and running the web layer, then it may make sense to distribute those two things to allow independence of management. Most of the good reasons for distributing code have their origins in the structures of the human organizations using or developing it.
There is no good technical reason why a web page should not run on the same CPU, sharing the same CLR VM and memory heap, as the database access layer.
Regardless of what you do with distribution, it would be unwise to architect your system with less-than-formal interfaces defining the connections between the layers. If you keep formal interfaces, it should be no problem for you to measure the performance and efficiency of a distributed approach versus a co-located approach.
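A minimal sketch of what such a formal interface could look like: the same contract can back an in-process implementation or a WCF proxy, so comparing co-located versus distributed becomes a factory change rather than a rewrite. All names here are illustrative assumptions.

```csharp
using System.ServiceModel;

// One formal contract for the layer boundary (illustrative).
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    string GetCustomerName(int id);
}

// Co-located option: the web layer calls this directly, in-process.
public class LocalCustomerService : ICustomerService
{
    public string GetCustomerName(int id) { return "stub"; } // would call the DAL
}

public static class CustomerServiceFactory
{
    // Distributed option: the same interface consumed over WCF.
    public static ICustomerService CreateRemote(string address)
    {
        var factory = new ChannelFactory<ICustomerService>(
            new BasicHttpBinding(), new EndpointAddress(address));
        return factory.CreateChannel();
    }
}
```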
Do you really need an app server? Just how big are we talking, exactly? For example, Stackoverflow.com has ~50k uniques a day and doesn't have an app server, so I assume you are talking much bigger than that? Most performance bottlenecks come down to database issues, so I would concentrate on those.
I suggest you take a look at the Patterns and Practices group's guidelines for performance, more specifically Chapter 6, "Improving ASP.NET Performance", of the guideline. I agree with Cheeso that you should seriously consider NOT physically splitting your application layer and UI layer if you can. The P&P guideline has the following notes:
Avoid Unnecessary Process Hops
Although process hops are not as expensive as machine hops, you should avoid process hops where possible. Process hops cause added overhead because they require interprocess communication (IPC) and marshaling. For example, if your solution uses Enterprise Services, use library applications where possible, unless you need to put your Enterprise Services application on a remote middle tier.
Understand the Performance Implications of a Remote Middle Tier
If possible, avoid the overhead of interprocess and intercomputer communication. Unless your business requirements dictate the use of a remote middle tier, keep your presentation, business, and data access logic on the Web server. Deploy your business and data access assemblies to the Bin directory of your application. However, you might require a remote middle tier for any of the following reasons:
You want to share your business logic between your Internet-facing Web applications and other internal enterprise applications.
Your scale-out and fault tolerance requirements dictate the use of a middle tier cluster or of load-balanced servers.
Your corporate security policy mandates that you cannot put business logic on your Web servers.
If you absolutely have to split the application logic up anyway, you could use WCF as a transport mechanism. I'm not sure how it stacks up against Remoting when it comes to performance; however, I seem to remember that this is the approach Microsoft is pushing.
Clemens Vasters (Technical Lead for the Microsoft .NET Service Bus) talks about WCF vs. Remoting in this answer on MSDN forums.
Learn to write asynchronously.
Explore the CCR runtime for example.
Each thread that is blocked waiting for IO responses is one less available to your system.
Turn off 'idealised logging', but leave the ability to switch it back on via an admin console. Logging is often a hidden bottleneck.
CACHE CACHE CACHE!
If it was expensive to get the data the first time, don't pay for it the second time! (See the caching sketch after this list.)
Avoid ASP.NET's Session State: it can seriously bloat memory use and lead to a large slowdown in page responsiveness.
Modify the HTTP headers to specify short browser caching (5-20 seconds, depending on the nature of the content).
Utilise GZIP while you are at it!
AND USE LOTS OF RAM
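To illustrate the caching and header tips above, here is a small sketch using ASP.NET's built-in cache and response-cache API; the key, TTL, and max-age values are arbitrary examples, not recommendations.

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class CachingSketch
{
    // "Don't pay for it the second time": check the cache first,
    // load and insert only on a miss.
    public static T GetOrLoad<T>(string key, Func<T> load, TimeSpan ttl)
    {
        object hit = HttpRuntime.Cache[key];
        if (hit != null) return (T)hit;

        T value = load(); // the expensive call (database, remote service, ...)
        HttpRuntime.Cache.Insert(key, value, null,
            DateTime.UtcNow.Add(ttl), Cache.NoSlidingExpiration);
        return value;
    }

    // Short browser caching (5-20 seconds) via the HTTP headers.
    public static void SetShortClientCache(HttpResponse response)
    {
        response.Cache.SetCacheability(HttpCacheability.Public);
        response.Cache.SetMaxAge(TimeSpan.FromSeconds(10));
    }
}
```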
Here are my tips
1) Move all your static files (images, CSS, JS) to a load balancer like nginx. This will greatly reduce the load on the IIS server and leave it enough free resources to serve the main requests.
2) Think about caching and avoiding database access altogether.
3) Try to implement REST principles as far as possible.
4) Keep session state to a bare minimum; if possible, avoid it altogether.
There are some good performance and scalability points in these articles from Omar Al Zabir.
10 ASP.NET Performance and Scalability Secrets
and
99.99% available ASP.NET and SQL Server SaaS Production Architecture
(also check out his book Building a Web 2.0 Portal with ASP.NET 3.5)
