Addressing scalability and performance in a .NET web application - asp.net

I'm working on a .NET portal that will have lots of concurrent users,
so scalability and performance need to be addressed in the design and architecture.
We plan to use load balancing in the application.
Keeping this in mind, what would be the best way of communicating between the IIS web server (hosting the aspx/aspx.cs files) and the application server (hosting the .NET assemblies such as the business logic and data access layers)?
Should it be .NET Remoting or a SOAP web service? Or is there another approach?
Thanks.

Is there another approach? Yes - don't distribute your objects.
The most scalable approach is NOT to distribute your objects away from each other. Ask yourself: why do you want to deploy one flavor of code to an "app server" while another flavor of code goes to a "web server"? If those two layers are distributed, the communication between them will be much, much more expensive than a local call.
With today's 64-bit servers, with all of that memory, and the hot CPUs, and with ASP.NET's superior memory management, why not put your business logic and DAL on the same physical machine as the ASPX files? Why not?
If you need to scale, add more servers. Simple.
There are good reasons, of course, to distribute. The most common good reasons have to do with domains of ownership, along several axes: security management, or even budget and control. To take the latter case: if one team is responsible for running the business logic and a separate team is responsible for building and running the web layer, then it may make sense to distribute those two things to allow independence of management. Most of the good reasons for distributing code have their origins in the structures of the human organizations using or developing it.
There is no good technical reason why a web page should not run on the same CPU, sharing the same CLR VM and memory heap, as the database access layer.
Regardless of what you do with distribution, it would be unwise to architect your system with anything less than formal interfaces defining the connections between the layers. If you keep formal interfaces, it should be no problem for you to measure the performance and efficiency of a distributed approach versus a co-located approach.
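To make that concrete, here is a minimal sketch of such a formal interface (the service and class names are hypothetical, not taken from any of these posts). The web layer depends only on the interface, so a co-located implementation can later be swapped for a remote proxy and the two can be measured against each other:

    // Hypothetical contract between the web layer and the business layer.
    public interface IOrderService
    {
        Order GetOrder(int orderId);
        int PlaceOrder(Order order);
    }

    // Co-located implementation: an ordinary class deployed to the web
    // application's Bin directory, so every call is a plain method call.
    public class LocalOrderService : IOrderService
    {
        public Order GetOrder(int orderId)
        {
            // ... call the DAL directly ...
            return new Order { Id = orderId };
        }

        public int PlaceOrder(Order order)
        {
            // ... call the DAL directly ...
            return order.Id;
        }
    }

    // The pages resolve the implementation through one factory, so switching
    // to a remote proxy later is a one-line change rather than a rewrite.
    public static class OrderServiceFactory
    {
        public static IOrderService Create()
        {
            return new LocalOrderService();
        }
    }

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }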

Do you really need an app server? Just how big are you talking, exactly? For example, stackoverflow.com has ~50k uniques a day and doesn't have an app server, so I assume you are talking much bigger than that? Most performance bottlenecks come down to database issues, so I would concentrate on that.

I suggest you take a look at the Patterns and Practices group's guidelines for performance, more specifically Chapter 6 - Improving ASP.NET Performance. I agree with Cheeso that you should seriously consider NOT physically splitting your application layer and UI layer if you can avoid it. The P&P guideline has the following notes:
Avoid Unnecessary Process Hops
Although process hops are not as expensive as machine hops, you should avoid process hops where possible. Process hops cause added overhead because they require interprocess communication (IPC) and marshaling. For example, if your solution uses Enterprise Services, use library applications where possible, unless you need to put your Enterprise Services application on a remote middle tier.
Understand the Performance Implications of a Remote Middle Tier
If possible, avoid the overhead of interprocess and intercomputer communication. Unless your business requirements dictate the use of a remote middle tier, keep your presentation, business, and data access logic on the Web server. Deploy your business and data access assemblies to the Bin directory of your application. However, you might require a remote middle tier for any of the following reasons:
You want to share your business logic between your Internet-facing Web applications and other internal enterprise applications.
Your scale-out and fault tolerance requirements dictate the use of a middle tier cluster or of load-balanced servers.
Your corporate security policy mandates that you cannot put business logic on your Web servers.
If you absolutely have to split the application logic up anyway, you could use WCF as the transport mechanism. I'm not sure how it stacks up against Remoting when it comes to performance, but I seem to remember that WCF is the direction Microsoft is pushing.
Clemens Vasters (Technical Lead for the Microsoft .NET Service Bus) talks about WCF vs. Remoting in this answer on MSDN forums.
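If you do go down the WCF route, the split looks roughly like the following minimal sketch (the service name, binding and address are illustrative assumptions, not details from the question):

    using System;
    using System.ServiceModel;

    // Contract shared between the web server and the application server.
    [ServiceContract]
    public interface IBusinessService
    {
        [OperationContract]
        decimal GetAccountBalance(int accountId);
    }

    // Implementation deployed on the application server.
    public class BusinessService : IBusinessService
    {
        public decimal GetAccountBalance(int accountId)
        {
            // ... call the data access layer here ...
            return 0m;
        }
    }

    // Self-hosting on the app server (the service could equally be hosted in IIS).
    class AppServerHost
    {
        static void Main()
        {
            using (var host = new ServiceHost(typeof(BusinessService),
                new Uri("http://appserver:8000/BusinessService")))
            {
                host.AddServiceEndpoint(typeof(IBusinessService), new BasicHttpBinding(), "");
                host.Open();
                Console.WriteLine("Service running. Press Enter to stop.");
                Console.ReadLine();
            }
        }
    }

    // On the web server, the ASPX code-behind calls it through a channel:
    //   var factory = new ChannelFactory<IBusinessService>(
    //       new BasicHttpBinding(), "http://appserver:8000/BusinessService");
    //   IBusinessService proxy = factory.CreateChannel();
    //   decimal balance = proxy.GetAccountBalance(42);

Keeping the contract in a shared assembly is what makes it straightforward to compare this remote deployment against simply referencing the implementation directly from the web application.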

Learn to write asynchronously.
Explore the CCR runtime, for example.
Each thread that is blocked waiting for an IO response is one less thread available to your system.
Turn off non-essential logging in production, but leave the ability to switch it back on via an admin console - logging is often a hidden bottleneck.
CACHE CACHE CACHE!
If it was expensive to get the data the first time, don't pay for it the second! (A small caching sketch follows this list.)
Avoid ASP.NET session state - it can seriously bloat memory usage and lead to a large slowdown in page responsiveness.
Modify the HTTP headers to specify short browser caching (5-20 seconds, depending on the nature of the content).
Utilise GZIP compression while you are at it!
AND USE LOTS OF RAM
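To make the caching and header tips above concrete, here is a minimal sketch for a classic Web Forms page (the cache key and the LoadProductsFromDatabase helper are hypothetical):

    using System;
    using System.Web;
    using System.Web.Caching;

    public partial class ProductList : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // CACHE: pay for the expensive query once, then serve it from memory.
            var products = HttpRuntime.Cache["product-list"] as ProductData;
            if (products == null)
            {
                products = LoadProductsFromDatabase();   // hypothetical expensive call
                HttpRuntime.Cache.Insert(
                    "product-list", products,
                    null,                                 // no dependency
                    DateTime.UtcNow.AddMinutes(5),        // absolute expiry
                    Cache.NoSlidingExpiration);
            }

            // SHORT BROWSER CACHING: let the client reuse the response briefly.
            Response.Cache.SetCacheability(HttpCacheability.Public);
            Response.Cache.SetMaxAge(TimeSpan.FromSeconds(10));

            // GZIP is typically switched on in IIS (or via a response filter)
            // rather than in page code.
        }

        private ProductData LoadProductsFromDatabase()
        {
            return new ProductData();
        }
    }

    public class ProductData { /* ... */ }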

Here are my tips:
1) Serve all your static files (images, CSS, JS) from a front-end server / load balancer like nginx. This will greatly reduce the load on the IIS server and leave it enough free resources to serve the main requests.
2) Think about caching and avoiding database access altogether.
3) Try to implement REST principles as far as possible (a small sketch follows this list).
4) Keep session state to a bare minimum - if possible, avoid it altogether.
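As one possible illustration of point 3 (the service and URI template below are assumptions, not from the answer), .NET 3.5 lets you expose resource-oriented, cache-friendly endpoints with WCF's web programming model, which also avoids server-side session state:

    using System.ServiceModel;
    using System.ServiceModel.Web;

    // A REST-style endpoint: stable URIs for resources, GET for reads,
    // which plays well with HTTP caching and keeps the server stateless.
    [ServiceContract]
    public interface IProductCatalog
    {
        [OperationContract]
        [WebGet(UriTemplate = "products/{id}", ResponseFormat = WebMessageFormat.Json)]
        Product GetProduct(string id);
    }

    public class ProductCatalog : IProductCatalog
    {
        public Product GetProduct(string id)
        {
            // ... look the product up (ideally from a cache in front of the database) ...
            return new Product { Id = id, Name = "sample" };
        }
    }

    public class Product
    {
        public string Id { get; set; }
        public string Name { get; set; }
    }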

There are some good performance and scalability points in these articles from Omar Al Zabir.
10 ASP.NET Performance and Scalability Secrets
and
99.99% available ASP.NET and SQL Server SaaS Production Architecture
(also check out his book Building a Web 2.0 Portal with ASP.NET 3.5)

Related

ASP.NET applications split into different tiers

I'm dealing with an architectural approach that doesn't fully convince me, but I can't figure out a good alternative.
Basically our company is pretty much always under attack... I mean there are constant phishing, spoofing, brute-forcing, etc. attempts against our public web site.
The decision has been taken to deploy the web apps in a 3-tier configuration (I really mean tier, not layer, as the two are often confused): the presentation tier on a machine in a subnet exposed to the internet, and the rest of the application on an internal subnet, with an application server that exposes everything needed by the presentation server or other internal systems, and obviously a database server.
Now, the separation between presentation and business logic is quite heavy both in terms of development and in terms of performance: using a DLL dedicated to data access is obviously faster than calling a remote service to perform the same operation.
This, on the other hand, has been seen as the most practical way to improve security: in this deployment model the presentation layer could be broken into by some hacker, but even then, from there, they gain no access to the database, and that's the important part.
The communication between the presentation tier and the business tier is made through WCF services exposing authenticated SOAP methods.
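For reference, that kind of authenticated SOAP call from the presentation tier typically boils down to something like this minimal sketch (the binding choice, credential type and address are assumptions, not details from the question):

    using System.ServiceModel;

    [ServiceContract]
    public interface IInternalBusinessService
    {
        [OperationContract]
        CustomerDto GetCustomer(int id);
    }

    class PresentationTierClient
    {
        static IInternalBusinessService CreateProxy()
        {
            // Message-level security with Windows credentials is one common choice;
            // certificate or username credentials are equally possible.
            var binding = new WSHttpBinding(SecurityMode.Message);
            binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;

            var factory = new ChannelFactory<IInternalBusinessService>(
                binding,
                new EndpointAddress("http://appserver.internal/BusinessService.svc"));

            return factory.CreateChannel();
        }
    }

    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }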
Moreover, the approach is to keep all the business logic in the BL tier, leaving the presentation tier quite "dumb": so the BL tier is not merely a secured DAL, but contains all the logic.
Is that an acceptable approach? Is there some smarter way to handle such a scenario that we're missing? Would you prefer to keep the BL on the presentation tier, leaving just the DAL in the middle tier? And for what reason?
I'm feeling pretty unsure about our solution, so I'm asking for any advice.

Highly configurable and efficient ESB / SOA / integration framework

My plan is to develop or use a Java-based integration framework (ESB, SOA, whatever) that deals with services, with the following constraints:
a Service can be deployed on multiple machines but doesn't have to be present on every one of them
a Service can be deployed and re-deployed (with a newer version) separately
a Service is connected to other services either by:
in-memory connections
(async / sync) remoting to other machines
the routing logic of the Service connections should be configurable on the fly, without re-deploying or restarting anything
I know that OpenESB is close to these requirements; however, it requires redeployment of the service to change the routing (assuming the connections are HTTP BC based). I'm unfamiliar in this regard with Mule ESB, WSO2, JBossESB, or other open-source ESBs... Is there a good solution for this (e.g. configurable in-memory and/or remote routing)? I don't really care about clustering, as I plan to use the servers separately, and the designated JMS solution (if one is required) would be HornetQ, if that matters.
You mention several different concepts, but a combination of an ESB pattern, an Apache load balancer and Maven should get you close. Do not get too hung up on the product; settle on a paradigm/pattern and the choice of product will be easy - it either does things the way you like or it does not.
Here is the pattern I use.
SOA Design Patterns
This may also interest you SOA for executives
Cheers
After a long discussion about the pros and cons, we are going to go with a HornetQ-based (JMS MQ) solution, where we create message routing rules and, where needed, processing code that handles the different kinds of routing. HornetQ is able to handle the in-JVM requirement too, but that part will be covered under the hood.

Differences between .NET application servers vs. Java application servers

I'd like to better understand the reasons for .NET's application server model compared to that used by most Java application servers.
In most cases I've seen with ASP.NET web applications, business logic is hosted in the web server's ASP.NET worker process. Another common approach is to have a physically or logically separate tier which hosts your business objects, which are then exposed as web services or accessed via mechanisms like WCF. The latter approach typically, but not always, seems to be used when higher scale is required. In the days of COM objects I saw Microsoft Transaction Server (MTS) and later COM+ hosting used to host COM objects containing business logic, with MTS (theoretically) managing object lifetime, transactions, concurrency, yada yada. This model largely seems to have disappeared in ASP.NET land.
In the Java world you might have Apache with Tomcat as the servlet container and your business objects hosted in Tomcat. In this case, Tomcat provides similar functionality to what MTS provided in the .NET world.
Several questions:
Why the fundamental difference in the Microsoft vs. Java approaches to application servers? This must have been an architecture/design choice when these frameworks were created.
What are the pros and cons of each approach?
Why did Microsoft move away from the MTS-hosting model (which is similar to the Tomcat servlet hosting model) to the more common current approach, which is just to have business objects as part of the web server's ASP.NET process?
If you wanted to implement the MTS type approach or the Tomcat type approach in ASP.NET applications today I assume a common pattern would be to host business objects in some IIS process (possibly on some different physical or logical tier) and access via WCF (or standard asmx web services, whatever). Is this a correct assumption?
To my way of thinking, the primary difference is in the "open" approach vs. the "integrated stack" approach. Microsoft likes to provide everything as an integrated stack that all shares a common flavor and approach. Java is more friendly to the "bring your own x" model, where you may want to plug in your favorite application server, transaction manager, etc. Both technology stacks allow in-process invocation as well as remote invocation with varying levels of transaction support.
Really, WCF is not a new technology stack, but a reorganization and rebranding of existing elements of the .NET stack. Specifically, WCF took on the functions of .NET Remoting, Web Services, and distributed transactions.
You reference "the more common current approach which is just to have business objects as part of the web server's ASP.NET process." That is only common for non-distributed apps. It is a simple model that works well when all of your objects will reside on the same server. This follows Microsoft's "Scale Out" deployment model. Rather than segregating object tiers across servers, put everything but the database together on the web servers and then incrementally add identical, redundant servers to scale out the web-server layer.
Microsoft has been pushing hard lately on Service Oriented Architecture, which relies more heavily on WCF and remote invocation. This is seen as a key to the cloud strategy that would have people moving parts or all of their applications to flexible resources in the cloud (which MS would like to host with Azure and the like).

What tools does your company use to manage application performance of asp.net applications?

I am not talking about application profilers or debuggers but more specifically about managing the applications in a production environment - so essentially monitoring, identifying bottlenecks, and deploying fixes.
For monitoring that the application is up and running we use Nagios.
We also use good old Performance Monitor for monitoring database connections, memory consumption and CPU usage.
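If you want to collect the same counters programmatically (for a dashboard or an automated alert), a minimal sketch with System.Diagnostics.PerformanceCounter looks like this (the counter names below are common ones, but verify they exist on your servers):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CounterSampler
    {
        static void Main()
        {
            // Machine-level counters; category and instance names vary by setup.
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            var mem = new PerformanceCounter("Memory", "Available MBytes");
            var requests = new PerformanceCounter("ASP.NET Applications",
                                                  "Requests/Sec", "__Total__");

            while (true)
            {
                // The first read of a rate counter returns 0, so sample repeatedly.
                Console.WriteLine("CPU: {0:F1}%  Free RAM: {1} MB  Req/sec: {2:F1}",
                    cpu.NextValue(), mem.NextValue(), requests.NextValue());
                Thread.Sleep(1000);
            }
        }
    }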
We use IPMonitor to verify uptime, and it has a lot of options for pinging the site for keyword validation, HTTP response validation, and response time. You can also use SNMP to figure out responsiveness of the processor and RAM, and remaining size on hard disks, among many other options. It supports multiple servers and types of servers, not just website or database.
Additionally, we test basic uptime and response speed with AlertSite.
A 3rd party, Keynote, tests our sites to verify that they are navigable like a human would browse. They have scripts to mimic clicks and interactions.
We use Spotlight for SQL server management, and also good old perfmon for the granular problem fixing.
We recently purchased WildMetrix to monitor and troubleshoot performance issues for our ASP.NET applications. It's nice because you can easily aggregate IIS, ASP.NET, and SQL Server information into a single graph or dashboard that allows you to pinpoint possible trouble spots. We currently use it as our primary performance reporting and tracking tool, along with ELMAH for exception tracking.

High performance ASP.NET setup

I would like to ask what the best setup is for the following application:
ASP.NET 3.5 web site - used as the presentation layer, a lot of AJAX and JS. Will not hit the server a lot.
ASP.NET WCF - service providing all data to the application. It's responsible for validation, data modelling/preparation and communication with the DB server.
Database - SQL Server 2005 Std; some logic is coded on the server side as stored procedures. Some of the logic can be a bit time consuming. In my opinion it's the most resource-consuming part of the app.
The website can have up to 1000 users per minute. We can have up to 4 servers with the following configuration: dual quad-core Intel Xeon (8 cores) at 2.00+ GHz, 16 GB RAM, SSD or RAID drives.
What is the best way to place parts of the application on the physical servers? Will they handle this kind of load?
The least scalable part of any application is the database server: you can add more web and application servers, but you can't replicate the DB with the same ease, so in the long run you will benefit if the DB does not contain any logic, especially any long-running logic.
In a lot of applications the limiting factor is not CPU but memory. Think about user sessions: if you store 1 MB of data per user, your machines (4 x 16 GB, roughly 64 GB in total) can support on the order of 64,000 simultaneous user sessions; that may or may not be sufficient. Both problems can be mitigated by application-level caching, but that can cause its own set of problems because now you are faced with stale data.
To scale session-based sites you will need a smart load-balancer solution that supports sticky sessions; for your loads you will most likely need a hardware load balancer.
In the application you describe, I suspect that thread management is going to be a big issue. Throwing hardware at the problem may not be the best approach.
In terms of partitioning, it depends on whether you can leverage things like caching and cache notifications. If every call to the app has to hit the DB and run a lengthy stored procedure, then you may want to have more DB machines and fewer front-end web servers.
This is a big subject. In an attempt to provide a reasonably comprehensive answer to exactly this kind of question, I ended up writing a book about it: Ultra-Fast ASP.NET: Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server.
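On the cache-notification point above, here is a minimal sketch, assuming SQL Server query notifications are enabled and the connection string, table and helper class are illustrative (not from the question); the cached entry is invalidated only when the underlying rows actually change, so the front end rarely hits the database:

    using System;
    using System.Data.SqlClient;
    using System.Web;
    using System.Web.Caching;

    public static class PriceListCache
    {
        // Hypothetical connection string; SqlDependency.Start must be called once
        // per application (e.g. in Application_Start) before dependencies are used.
        private const string ConnectionString =
            "Data Source=dbserver;Initial Catalog=Shop;Integrated Security=True";

        public static void StartNotifications()
        {
            SqlDependency.Start(ConnectionString);
        }

        public static DataHolder GetPrices()
        {
            var cached = HttpRuntime.Cache["prices"] as DataHolder;
            if (cached != null) return cached;

            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "SELECT ProductId, Price FROM dbo.Prices", connection))
            {
                // The dependency must be created before the command executes;
                // it invalidates the cache entry when the rows change.
                var dependency = new SqlCacheDependency(command);
                connection.Open();

                var holder = DataHolder.Load(command.ExecuteReader());
                HttpRuntime.Cache.Insert("prices", holder, dependency);
                return holder;
            }
        }
    }

    public class DataHolder
    {
        public static DataHolder Load(SqlDataReader reader)
        {
            // ... materialize the rows ...
            return new DataHolder();
        }
    }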
