I'm developing an ASP.NET web application targeted at business customers. Can anyone provide some guidelines on how I can determine the number of users my application can support?
Also, the application uses session variables, so it's currently limited to one web server until that changes.
You can use a session state server running on a second box, or SQL Server-backed sessions, to get around the single-box issue.
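For example, moving session state out of process is mostly a web.config change (both elements below go inside <system.web>). This is only a sketch; the server names and timeouts are placeholders, and anything you put in session must be serializable for these modes:

    <!-- Option 1: dedicated state server (start the "ASP.NET State Service" on that box) -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=stateserver01:42424"
                  timeout="20" />

    <!-- Option 2: SQL Server-backed sessions (run aspnet_regsql.exe -ssadd against that server first) -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=dbserver01;Integrated Security=SSPI"
                  timeout="20" />

Either option lets every web server in the farm share the same session data, so you are no longer pinned to a single box.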
As for the question at hand, there is no real way to determine that besides getting your hands on production hardware, setting up the app and running load tests until you can figure out where it breaks. Even then, this won't necessarily give you the real number, as you have to make assumptions about what the users are doing, and it is pretty much impossible to simulate the effects of the network cloud in a test environment.
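If you want a very rough first signal before setting up a proper load-testing tool (WCAT, the Visual Studio load tests, JMeter and the like), a throwaway harness along these lines can at least show how response times degrade as concurrency rises. This is only a sketch with placeholder URLs and user counts, not a substitute for real load testing against production-like hardware:

    // Crude concurrency probe (C#) - placeholder URL and counts, not a real load test.
    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class LoadProbe
    {
        static async Task Main()
        {
            ServicePointManager.DefaultConnectionLimit = 1000; // allow real concurrency from this client
            var url = "http://test-server/yourapp/Default.aspx"; // placeholder

            using (var client = new HttpClient())
            {
                foreach (var users in new[] { 10, 50, 100, 250 })
                {
                    var sw = Stopwatch.StartNew();
                    var tasks = Enumerable.Range(0, users).Select(_ => client.GetAsync(url));
                    var responses = await Task.WhenAll(tasks);
                    sw.Stop();

                    var failures = responses.Count(r => !r.IsSuccessStatusCode);
                    Console.WriteLine("{0} concurrent requests: {1} ms total, {2} failures",
                                      users, sw.ElapsedMilliseconds, failures);
                }
            }
        }
    }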
Only you/your team can determine the exact numbers that can be supported.
Your key here is having a deep understanding of your problem domain and a distinct separation of the processing layers.
The separation allows you to isolate bottlenecks and tune the worst-performing layer much more easily, then move on to the next layer/performance limitation.
Do not make assumptions, as you will find impacts unrelated to your assumptions that may surprise you.
Design to scale
Design to have separate “layers” for performance-tuning reasons as well as your own sanity – it is also a better design principle, and it is one of the reasons development is segmented into layers.
Test – designing “pass/fail” tests of layers against a design specification is only one facet of testing. Your question is answered by the performance impact of the technology, architecture and tools you choose to use in your application. Plan to make changes to each part of your application to address performance issues.
Gather performance metrics from each “layer”, and tune each layer as you discover performance challenges. Plan for, and work out how to quantify, the performance measurements of each layer (a minimal timing sketch follows at the end of this answer).
You WILL at some juncture have to make a compromise between performance and “cool/wow” factors. Each will impact your ability to market your solution, and you must then determine which will have the greatest impact.
This is one of the PAIN factors that I use to measure quality in designs – Plan All Incremental Needs – which I have discussed elsewhere and in blogs.
Personally, I will often make decisions on design based on performance, but your marketing strategy might differ.
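To make the metrics-gathering point above concrete, here is a minimal per-layer timing sketch (the names are hypothetical, and in practice you would feed the numbers into your logging or monitoring system rather than the trace output):

    // Minimal per-layer timing helper (C#) - a sketch, not a full instrumentation framework.
    using System;
    using System.Diagnostics;

    public static class LayerTimer
    {
        public static T Measure<T>(string layerName, Func<T> work)
        {
            var sw = Stopwatch.StartNew();
            try
            {
                return work();
            }
            finally
            {
                sw.Stop();
                // Swap this for your logging/metrics sink of choice.
                Trace.WriteLine(string.Format("{0} took {1} ms", layerName, sw.ElapsedMilliseconds));
            }
        }
    }

    // Hypothetical usage around a data-access call:
    // var orders = LayerTimer.Measure("DataAccess.GetOrders", () => orderRepository.GetOrders(customerId));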
Just so you know, session state can be made to work with multiple web servers very simply by moving it out to a SQL Server database.
A quick how-to is available here.
As for your original question, I would look into load testing. Hopefully there will be other posters who know more about that. I would focus on page views, as opposed to users.
Measure the resources (CPU, memory, disk, bandwidth) needed for typical actions within your application. Divide available resources by resources needed for a representative user "session" and you have a rough number.
Until you have a good set of real data, you'll have to make guesses about typical usage habits and the resource requirements. That's about all you can do for a 1st pass at estimating capacity.
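As a purely hypothetical illustration: if a typical user generates one page view every 30 seconds, each page view costs about 50 ms of CPU, and the server has 4 cores, then CPU alone supports roughly (4 × 1000 ms ÷ 50 ms) = 80 page views per second, or about 80 × 30 ≈ 2,400 concurrent sessions. Run the same division for memory, disk and bandwidth, and your estimate is whichever resource gives the smallest number.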
A good load balancer can ensure that a user will return to the same server.
One of the key benefits provided by Onion architecture is the ability to swap out "infrastructure" elements, such as "Data Access, I/O, and Web Services" (http://jeffreypalermo.com/blog/the-onion-architecture-part-3/).
Jeff says in his post from 2008 that "the industry has modified data access techniques at least every three years".
Does anyone have an example of a reasonably large project where Onion architecture was used and swapping out of key infrastructure elements was subsequently undertaken?
I'm interested to understand:
How common is this scenario, in general?
My instinct tells me that while "data access techniques" may be modified every three years, changes to the actual infrastructure for running solutions, which would allow this benefit to be realised, may be a lot less frequent.
What were the conditions that the solution was operating under originally?
What caused the change in the underlying infrastructure?
Are there lessons to be learned about the practical implications of changing infrastructure in this way, which may allow us to refine original implementations of the Onion architecture?
I'm interested to hear whether there were unexpected changes required beyond just replacing the infrastructure component and implementing the same interface. For example, did the new infrastructure require new arguments to be passed to previously defined methods e.g. SaveOrder(int ID) -> SaveOrder(int ID, bool AllowSiblings, bool SiblingCreated) when moving from a Relational to NoSQL DB model.
Did the implementation of this architecture + rework to migrate to new infrastructure significantly decrease the total effort required, if compared to a traditional, coupled approach?
Do developers find coupled, hard-referenced code easier to write and debug than loosely coupled, indirectly referenced code, but the eventual payoff for infrastructure changes makes this worth it?
Well, IMHO, the primary intent of such an architecture style (Hexagonal, Ports & Adapters, Onion …) is that it allows you to focus on your domain and how you will deliver value, instead of focusing first on UI, frameworks or storage issues. It allows you to defer such decisions.
As Jeffrey says, the ability to swap out "infrastructure" elements is a nice side effect of such an architecture style. Even if you will not switch from one RDBMS to another every 6 months, it’s quite reassuring to know that it would be possible to do so without pain.
Rather than thinking about changing your storage mechanism on a regular basis or, as you said, “swapping out of key infrastructure elements”, just think about the third-party services that you plug into your system. Those are likely to change on a regular basis; you might also switch from one provider to another. This is a much more common scenario, one we face far more often. In this particular case the domain behavior won’t change and the interfaces will stay the same; you won’t have to change a single line of code in your core domain layer. Only the implementation made somewhere in your infrastructure layer might have to change. That’s another noteworthy benefit of that kind of architecture!
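A minimal sketch of what that looks like in code (the payment example and all names here are made up for illustration, not taken from any particular project): the core domain owns the interface, each provider gets its own adapter in the infrastructure layer, and swapping providers means writing a new adapter rather than touching the domain.

    // Port owned by the core domain - no reference to any provider SDK.
    public interface IPaymentGateway
    {
        bool Charge(int orderId, decimal amount, string currency);
    }

    // Adapter for one hypothetical provider, living in the infrastructure layer.
    public class ProviderAPaymentGateway : IPaymentGateway
    {
        public bool Charge(int orderId, decimal amount, string currency)
        {
            // Call provider A's API here and map its response back to the domain's result.
            return true; // placeholder
        }
    }

    // Switching to provider B means adding a second adapter; the domain never changes.
    public class ProviderBPaymentGateway : IPaymentGateway
    {
        public bool Charge(int orderId, decimal amount, string currency)
        {
            // Call provider B here instead.
            return true; // placeholder
        }
    }

    // The domain service depends only on the port; the concrete adapter is wired up at the edge (IoC container).
    public class CheckoutService
    {
        private readonly IPaymentGateway _payments;

        public CheckoutService(IPaymentGateway payments)
        {
            _payments = payments;
        }

        public bool PlaceOrder(int orderId, decimal total)
        {
            return _payments.Charge(orderId, total, "USD");
        }
    }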
Please read this nice Uncle Bob article about Clean Architecture, where he explains why the ability to defer critical infrastructure decisions is really cool!
--- EDIT ---
Could you provide an example of where you have swapped out a third party service?
We have tons of examples where we switched from one provider to another (from payment providers to live-feed providers or whatever). The business stays the same; the domain behaviors are still the same. Changing a provider should not have any kind of impact on your business. You don’t have to change the way your business works, where the value really is, just because you change from one provider to another; it makes no sense. Isolating your domain behaviors in an independent core layer, with no dependencies on any third-party libraries, frameworks or provider services, definitely helps you deal with changes.
I have the feeling that you’re trying to convince yourself whether to go with Onion. You might be on the wrong track if you are only thinking about migrating to new infrastructure-related stuff (DB, third-party stuff...). Focus on your domain instead. Ask yourself whether your domain is complex enough to require such an architecture style. Don’t use a bazooka to kill a fly. As Simon Brown says: "Principles are good, but make sure they’re realistic and don’t have a negative impact"!
If your application is quite small, with no complex business domain, go for a classic n-tier architecture; that’s OK. Don’t change things just for the sake of it or just because of a buzzword. But also keep in mind that an isolated core business layer without dependencies, as in Onion architecture, can be very easy to unit test!
Now for your additional questions:
Did the implementation of this architecture + rework to migrate to new infrastructure significantly decrease the total effort required, if compared to a traditional, coupled approach?
It depends! :-) In tightly coupled applications, as soon as there’s a new infrastructure element to be migrated, there is little doubt that you’ll have to modify code in every layer (including the business layer). But if the application is small, quite straightforward, well organized and with decent test coverage, this shouldn’t be a big deal. Now, if it’s quite big, with a more complex business domain, it might be a good idea to isolate that domain in a totally separate layer with no dependencies at all, ensuring that infrastructure changes won’t cause any business regression.
Do developers find coupled, hard-referenced code easier to write and debug than loosely coupled, indirectly referenced code, but the eventual payoff for infrastructure changes makes this worth it?
Well, ask your teammates! Are they used to working with IoC? Remember that architecture design and choices must be a team decision. It must be something shared by the whole team.
I need to build a reliable predictive dialer based on Asterisk. Currently the system we use includes Wombat and Asterisk, and we do not find this solution usable as Wombat provides a poor API and it's impossible to use it without regular manual operations.
The system we want:
Can be used solely via API or direct database queries (adding lists to campaigns, updating lists, starting campaigns, stopping campaigns etc.) so that it can be completely integrated into an existing product
Is free, or paid for annually at a rate independent of usage
Is considered stable
Should be able to handle tens of thousands of calls per day, if it matters
Use vicidial.org, or hire a freelancer to build a new core with the API you need.
You can also check OSDial for this; it is also developed using Asterisk.
We have been working with a preview of the next version of Wombat through the Early Access program. Wombat has a complete configuration and reporting JSON API, and you can deploy it "headless" in order to scale up to thousands of parallel lines. If you ask Loway, they can likely get you access to the Early Access program.
BTW, Vicidial is great for agent-based outbound, but it imposes quite a large penalty on the number of agents per server – you cannot reasonably use it to do telecasting at the scale we are looking at, as it would require too many servers. Wombat is leaner and can drive over one thousand channels per server. YMMV.
This question would be better placed on a "hire-a-freelancer" site like oDesk ... if you need custom programming done, those are the sorts of places to go to get manpower.
Your specifications are well within what is possible with Asterisk. I'd strongly recommend looking at Vicidial and OSDial, as others have suggested; out of the box, they are pretty good.
The hard part of any auto-dialer is not the dialer, oddly enough. It's the prediction algorithms, the answering-machine detection algorithms and the agent UI. Those are what make or break an auto-dialer application for a company.
Situation: The DBA is an offsite contractor who keeps the entire DAL code checked out in TFS. It would be nice, as the front-end developer, to be able to add columns and tweak procs and whatnot, without having to rely on waiting for this dude to respond to your emails before the work gets done.
Question: What would be a recommended solution/process that would allow for more rapid/agile development, while maintaining data integrity as well as peace, love and happiness among the team?
I'm getting some good feedback on Programmers HERE
There is no general technical answer to your question (unless you can define a very limited kind of needed access, which can be supplied via an API he provides for you in the DAL, etc.).
Assuming you have already tried to talk with him, and perhaps even escalated the issue, there is probably a valid reason for limiting access (security, data model integrity, performance tuning, version control, etc.).
Try to understand the reasoning behind his approach and to better define your actual needs; it is possible that you can then formulate an improvement to your architecture (such as the aforementioned API) or your development process. Most importantly, talk frankly about your concerns; communication can go a long way, as long as you are willing to understand the other side.
I'm working on a forum-based website. The site also supports onsite messaging (i.e. users can send private messages to other users). What I'm trying to do is notify a member when they have new messages, for example by displaying the inbox link in bold along with the number of messages, e.g. Inbox(3).
I'm a little confused about how this can be implemented for a website running on a server farm. Querying the database with every request seems like overkill to me, so that is out of the question; probably a shared cache should be used instead. I tend to think this is a common feature for many sites, including many of the large ones running on server farms, so I wonder how they implement it. Any ideas are appreciated.
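What I have in mind is roughly the sketch below (the names are made up, and on a farm the ICache implementation would have to be a distributed cache such as memcached or AppFabric rather than the in-process System.Web.Cache, so that all servers see the same entry): the unread count lives in the shared cache, is repopulated from the database only on a miss, and is invalidated whenever a new message is written.

    // Sketch: cache the unread-message count, invalidate it on writes. All names are hypothetical.
    using System;

    public interface ICache
    {
        object Get(string key);
        void Set(string key, object value, TimeSpan ttl);
        void Remove(string key);
    }

    public interface IMessageRepository
    {
        int CountUnread(int userId); // single DB query
    }

    public class InboxCounter
    {
        private readonly ICache _cache;
        private readonly IMessageRepository _messages;

        public InboxCounter(ICache cache, IMessageRepository messages)
        {
            _cache = cache;
            _messages = messages;
        }

        public int GetUnreadCount(int userId)
        {
            string key = "unread:" + userId;
            object cached = _cache.Get(key);
            if (cached != null)
                return (int)cached;

            int count = _messages.CountUnread(userId);         // DB hit only on a cache miss
            _cache.Set(key, count, TimeSpan.FromMinutes(10));  // short TTL as a safety net
            return count;
        }

        public void OnMessageSent(int recipientId)
        {
            _cache.Remove("unread:" + recipientId);            // next request recomputes the count
        }
    }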
SO caches the questions; however, every postback re-queries your reputation. This can be seen by writing a couple of good answers quickly, then refreshing the front page.
The questions will only change every minute or so, but you can watch your rep go up each time.
Waleed, I recommend you read the articles on High Scalability. They have specific case studies on the architectures of various mega-scale web applications. (See the sidebar on the right side of the main page.)
The general consensus these days is that RDBMS usage in this type of application is a bottleneck. It is also probably safe to say that most highly scalable web applications sacrifice consistency to achieve availability.
This series should be informative of various views on the topic. "A Word on Scalability" is highly cited.
In all this, keep in mind that these folks are dealing with Flickr-, Amazon- and Twitter-scale issues and architectures. The solutions are somewhat radical departures from the (previously accepted) norms, and unless your forum application is the next Big Thing, you may wish to first test out the conventional approach to determine whether it can handle the load.
When starting a new ASP.NET application, with the knowledge that at some point in the future it must scale, what are the most important design decisions that will allow future scalability without wholesale refactoring?
My top three decisions are:
Disabling session state, or storing it in a database.
Storing as little as possible in session state.
Good N-Tier architecture. Separating the business logic and using web services instead of directly accessing DLLs ensures that you can scale out both the business layer and the presentation layer. Your database will likely be able to handle anything you throw at it, although you can probably cluster that too if needed.
You could also look at partitioning data in the database too.
I have to admit though I do this regardless of whether the site has to scale or not.
These are our internal ASP.NET do's and don'ts for massively visited web applications:
General Guidelines
Don't use Sessions - SessionState=Off
Disable ViewState completely - EnableViewState=False
Don't use any of the complex ASP.NET UI controls; stick to basic ones (e.g. a simple Repeater instead of a DataGrid)
Use the fastest and shortest data access mechanisms (stick to SqlDataReaders on the front site); a minimal sketch follows this list
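For illustration, the kind of lean, forward-only read that last item refers to (connection string, table and query are placeholders):

    // Lean forward-only read for front-site pages (C#) - query and connection string are placeholders.
    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class HeadlineReader
    {
        public static List<string> GetLatestHeadlines(string connectionString)
        {
            var headlines = new List<string>();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT TOP 20 Title FROM Posts ORDER BY CreatedOn DESC", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        headlines.Add(reader.GetString(0));
                }
            }
            return headlines;
        }
    }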
Application Architecture
Create a caching manager with an abstraction layer. This will allow you to replace the simple System.Web.Cache with a more complex distributed caching solution in the future when you start scaling your application (see the sketch after this list).
Create a dedicated I/O manager with an abstraction layer to support future growth (S3 anyone?)
Build timing tracing into your main pipelines, which you can switch on and off; this will allow you to detect bottlenecks when they occur.
Employ a background processing mechanism and move into it whatever is not required to render the current page.
Better yet - consider firing events from your application to other applications so they can do that async work.
Prepare for database scalability: put your own layer in place so that you can later decide whether to partition your database or, alternatively, work with several read servers in a master-slave scenario.
Above all, learn from others successes and failures and stay positive.
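A minimal sketch of the caching abstraction mentioned above (the interface and class names are ours, not a prescribed API); the same idea applies to the I/O manager:

    // Cache manager abstraction (C#). Today it wraps System.Web.Cache; a distributed cache
    // (memcached, AppFabric, Redis, ...) can later be dropped in behind the same interface.
    using System;
    using System.Web;
    using System.Web.Caching;

    public interface ICacheProvider
    {
        T Get<T>(string key) where T : class;
        void Set<T>(string key, T value, TimeSpan ttl) where T : class;
        void Remove(string key);
    }

    public class AspNetCacheProvider : ICacheProvider
    {
        public T Get<T>(string key) where T : class
        {
            return HttpRuntime.Cache[key] as T;
        }

        public void Set<T>(string key, T value, TimeSpan ttl) where T : class
        {
            HttpRuntime.Cache.Insert(key, value, null,
                DateTime.UtcNow.Add(ttl), Cache.NoSlidingExpiration);
        }

        public void Remove(string key)
        {
            HttpRuntime.Cache.Remove(key);
        }
    }

Calling code only ever sees ICacheProvider, so moving to a distributed cache later is a matter of writing one new implementation and changing the registration in your IoC container.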
Ensure you have a solid caching policy for transient/static data. Database calls are expensive, especially with separate physical servers, so be aggressive with your caching.
There are so many considerations, that one could write a book on the subject. In fact, there is a great book and it is free. ;-)
Microsoft has released Improving .NET Application Performance and Scalability as a PDF eBook.
It is worth reading cover to cover, if you don't mind the droll writing style. Not only does it identify key performance scenarios, it also covers establishing benchmarks, measuring performance, and how to apply what you learn.