Could Umbraco Membership handle, say, 1 million users?
The member data wouldn't be managed in the backend, just by the users themselves on their "my account" page.
Is this feasible using the default setup, or would I need to create a custom one?
Theoretically it should work, since it uses the ASP.NET membership provider, which runs on SQL Server, and SQL Server can scale almost without limit.
That said, with a million users you are going to need some serious hardware to run the SQL Server database. If I had to guess, as your number of users grows toward very large numbers, Umbraco itself will become the bottleneck before the membership provider does.
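Since the default setup rides on the standard provider model, a self-service "my account" page is mostly plain System.Web.Security calls. Here's a minimal sketch, assuming the default membership provider is configured in web.config; the AccountPage class and its method names are mine, purely illustrative:

```csharp
using System;
using System.Web.Security;

public class AccountPage
{
    // Register a new member through the configured membership provider.
    public void Register(string userName, string email, string password)
    {
        MembershipCreateStatus status;
        Membership.CreateUser(userName, password, email,
            null, null, true, null, out status);

        if (status != MembershipCreateStatus.Success)
            throw new InvalidOperationException("Registration failed: " + status);
    }

    // Let a logged-in member change their own password.
    public bool ChangePassword(string userName, string oldPassword, string newPassword)
    {
        MembershipUser user = Membership.GetUser(userName);
        return user != null && user.ChangePassword(oldPassword, newPassword);
    }
}
```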
Indeed, theoretically you're only limited by SQL Server's specs. See this URL for more information:
http://our.umbraco.org/forum/core/general/11920-Maximum-number-of-content-nodes-for-Umbraco-45
Related
First of all, let me be clear that I am not from a web background, so if any of my understanding of how this works is incorrect, please feel free to correct me.
Let's say I have a website which I would like to host in the cloud because:
- I don't want to take care of hardware
- I want to scale my website as needed
Now I am a bit confused about the role of SQL Server versus the role of SQL Azure in this case.
Normal Web Hosting
When I think of a normal website, I know that I need a host/server on which the website will be hosted, and that host should be able to support SQL Server. For scaling purposes I would have to host my website/ASP pages on multiple servers. Similarly, if I wanted to scale up SQL Server, I would have to host it on multiple servers and make sure the data stays up to date across all of them through some mechanism.
Cloud Based Hosting
Now I think I can set up a similar structure on the cloud/Azure as well. If so, would I be using the true capabilities of the cloud in this case?
Or should I use SQL Azure instead of SQL Server? What benefit would I get in that case? Would I still be responsible for scaling and for the consistency of the data? I know I can scale out the website by setting the number of VMs/instances, but what about scaling the database?
Edit
Thanks to Florin Dumitrescu, the terminology I wanted was scaling out, because I am more concerned about performance than about how big my database is in terms of size. I am more concerned with how the database would scale across different servers/systems to accommodate the load and so give better performance.
SQL Azure, as Yossi mentioned, is a Database-as-a-Service. As such, you simply ask for it to be provisioned, magic happens, and you have a database that scales from 1GB to 5GB, 10GB, all the way to 50GB (soon to be 150GB as announced at SQL PASS). The nice thing about SQL Azure: you don't have to worry about any infrastructure, servers, licensing, etc. You simply connect with your connection string. SQL Azure is designed to be scalable to handle a considerable number of concurrent tenants, so you don't have to concern yourself with scaling.
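To illustrate "you simply connect with your connection string", here's a minimal sketch; the server name, database and credentials are placeholders for whatever the portal hands you at provisioning time:

```csharp
using System.Data.SqlClient;

public static class AzureDb
{
    public static void Ping()
    {
        // Placeholder values; SQL Azure expects the user@server login form
        // and connections over TCP port 1433 with encryption on.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydb;User ID=myuser@myserver;Password=...;" +
            "Trusted_Connection=False;Encrypt=True;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // From here on it's the same ADO.NET code you would write
            // against an on-premises SQL Server.
        }
    }
}
```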
SQL Azure also replicates its data in the data center, to provide "durable" storage. You still need to design a Disaster Recovery scheme, in case the data center becomes unavailable (and you can use the Data Sync service for that).
As far as your website itself: As you scale out to multiple instances, each instance runs the same code and uses the same resources. Taking this one step further, you can move your static (non-changing) web content, such as images and CSS, to Blob storage. This has several advantages over storing them with the website itself:
Ability to enable the Content Delivery Network, a worldwide edge-caching service providing better performance for your end users
Less strain on your web server instances, as requests for those images will now be directed to Blob storage, a completely separate URL from your website
Ability to update an image or stylesheet without having to re-deploy your application - simply upload a new file to Blob storage.
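As a sketch of that last point, updating a stylesheet without redeploying, using the StorageClient library that ships with the Windows Azure SDK 1.x (the account credentials, container and file names are all placeholders):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class StaticContent
{
    public static void PublishStylesheet()
    {
        // Placeholder credentials; use your storage account's name and key.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");
        var client = account.CreateCloudBlobClient();

        var container = client.GetContainerReference("assets");
        container.CreateIfNotExist();
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob // blobs publicly readable
        });

        // Upload (or overwrite) the stylesheet; no application redeploy needed.
        var blob = container.GetBlobReference("css/site.css");
        blob.UploadFile(@"C:\build\site.css");
    }
}
```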
I highly recommend the Windows Azure Platform Training Kit, as there are labs that take you through the fundamentals of all of this, with complete code samples as well. This is updated almost monthly, staying in sync with the latest Windows Azure SDK and tools.
If you're hosting your web site in the cloud and you need a database, then SQL Azure is almost certainly the best option.
SQL Azure is a database-as-a-service, so you'll create your database and work against it from your code, but you won't have to worry about provisioning; there are no servers as such, it is all taken care of.
From an application point of view it looks and behaves pretty much like SQL Server, so initially all that changes is the connection string.
As others noted, SQL Azure takes away your concerns about setting up and taking care of the infrastructure. This is part of the premise of Azure in general, which is to provide a platform rather than just infrastructure.
The price you pay for that is some limitations on capability (versus regular SQL Server): a limit on size (at least until Federation becomes available) and increased latency (since your database is not running on the same server as your app).
Microsoft TechEd has a "SQL Azure Performance and Elasticity Guide" which you should probably take a look at.
I have been looking for a solution that would allow us to monitor our web servers' performance counters through an ASP.NET website.
Is there an existing tool that I can make use of to accomplish this or will I need to roll my own?
The only solution I have found online is using perfmon to connect to the remote server, which I need to avoid.
The only criteria we have are the ability to select or configure which counters are used, and a web interface to view those counters at a later date. We need a historical record of the servers' performance.
We are using ASP.NET websites on IIS.
Thanks
Using perfmon is the standard way to monitor performance counters remotely. This is done by sysadmins across the globe.
Why do you need to avoid this?
However, you will need to roll your own. I have done this in the past (for users who could not figure out perfmon...).
In terms of historical data: you will need to poll the performance counters yourself and record the data somehow (database, flat files, etc.).
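A minimal sketch of that polling loop using System.Diagnostics; the server name WEBSRV01 is a placeholder, and the account running this needs remote performance-counter rights on the target machine:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterPoller
{
    static void Main()
    {
        // Category / counter / instance / machine; the machine name is a placeholder.
        var cpu = new PerformanceCounter(
            "Processor", "% Processor Time", "_Total", "WEBSRV01");

        cpu.NextValue(); // the first sample always reads 0; prime the counter
        while (true)
        {
            Thread.Sleep(1000);
            float value = cpu.NextValue();
            // Persist the sample wherever you keep history (database, flat file...).
            Console.WriteLine("{0:u}\t{1:F1}", DateTime.UtcNow, value);
        }
    }
}
```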
You can also set up a website to display current values and to control and configure performance counters; the account the site runs under will require sufficient permissions, however.
What issues do I need to be aware of when deploying an ASP.NET application to a web farm?
All session state information would need to be shared across servers. The simplest way is to use the MSSQL session state provider, as noted.
Any disk access, such as dynamic files stored by users, would need to be in a location available to all servers, for example some form of network-attached storage. Script files, images, HTML and so on would just be replicated onto each server.
Storing information in the Application object, or loading information on application startup, would need to be reviewed: the startup events fire independently on each server, so they run again when a user first hits a new machine in the farm.
Matching machine keys across the servers is a very big one, as others have suggested. You may also have problems if you are using SSL against an IP address rather than a domain.
You'll have to consider which load-balancing strategy you're going to use, as this could change your approach.
Sessions are a big one: make sure you use SQL Server for managing sessions, and that all servers point to the same SQL Server instance.
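For reference, that looks something like this in each server's web.config; "SessionDb01" is a placeholder for your shared instance, prepared beforehand with aspnet_regsql.exe:

```xml
<system.web>
  <!-- Identical on every server in the farm, all pointing at one instance. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=SessionDb01;Integrated Security=SSPI;"
                timeout="20" />
</system.web>
```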
One of the big ones I've run across is issues with different machineKeys spread across the different servers. ASP.NET uses the machineKey for various encryption operations such as ViewState and FormsAuthentication tickets. If you have different machineKeys you could end up with servers not understanding post backs from other servers. Take a look here if you want more information: http://msdn.microsoft.com/en-us/library/ms998288.aspx
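The fix is to pin an explicit, identical machineKey in every server's web.config rather than letting each box auto-generate its own. The key values below are obviously placeholders; generate your own:

```xml
<system.web>
  <!-- Same element, byte for byte, on every server in the farm. -->
  <machineKey validationKey="[your 128-hex-char validation key]"
              decryptionKey="[your 48-hex-char decryption key]"
              validation="SHA1"
              decryption="AES" />
</system.web>
```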
1. Don't use sessions; use profiles instead. You can configure a SQL cluster to serve them. Sessions will query your session database far too often, while profiles just load themselves once, and that's it.
2. Use a distributed caching store like memcached for caching data, and the ASP.NET cache for things you'll need a lot (see the sketch after this list).
3. Use a SAN or an EMC array to serve your static content.
4. Use S3 or something similar as a fallback for 3.
5. Have a decent load balancer, so you can easily update server by server without ever needing to shut down the site.
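A minimal sketch of the ASP.NET cache side of point 2 (LoadQuotesFromDatabase and the key name are hypothetical). Keep in mind each server in the farm holds its own copy of this cache, which is exactly why genuinely shared data belongs in memcached instead:

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class QuoteCache
{
    public static string[] GetQuotes()
    {
        // Served from this server's in-memory cache when present.
        var cached = (string[])HttpRuntime.Cache["quotes"];
        if (cached == null)
        {
            cached = LoadQuotesFromDatabase(); // the expensive call we want to avoid
            HttpRuntime.Cache.Insert("quotes", cached, null,
                DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
        }
        return cached;
    }

    // Stand-in for the real data access code.
    static string[] LoadQuotesFromDatabase()
    {
        return new[] { "example quote" };
    }
}
```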
HOW TO: Set Up Multi-Server ASP.NET Web Applications and Web Services
Log aggregation is easily overlooked: before processing HTTP logs, you might need to combine them to create a single log that includes the requests sent to all the servers.
I am using an ASP.NET session in SQL Server mode. I have created the necessary tables and stored procs in a custom DB.
My question is:
Can I use this database to serve more than one application/website?
Is there anything to take into consideration when having multiple websites use the same DB for their session store?
cheers
Yes, you can use this database to serve more than one site; the session provider will take care of the semantics of that.
It would make profiling more difficult if there is a performance problem, though. Why not create a second state database for the second application? It's not much work: simply run the setup again with a different name and point the second application at that DB in its session configuration.
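Creating that second, custom-named state database is one command with the tool that ships with the framework (the server and database names below are placeholders):

```
aspnet_regsql.exe -S YourSqlServer -E -ssadd -sstype c -d AppTwoSessionState
```

Then point the second site's sessionState sqlConnectionString at that database, adding allowCustomSqlDatabase="true" since you're naming a specific catalog.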
The short answer, though, is that you can use the same session database and each session should be fine, although I wonder if anyone has any comments on colliding session IDs between the two applications.
I am looking for a best practice for end-to-end authentication from internal web applications down to the database layer.
The most common scenario I have seen is a single SQL account with its permissions set to what the application requires; this account is used for all application calls. Then, when people need direct access to the database via query tools and the like, a separate group is created with that query access, and people are added to that group.
The other scenario I have seen is complete Windows authentication end to end. The users themselves are added to groups which have all the permissions set, so a user is able to update and change data outside the parameters of the application. This normally involves locking people down to the appropriate stored procedures so they aren't updating the tables directly.
The first scenario seems relatively easy to maintain, but raises the concern that if there is a security hole in the application, the whole database is compromised.
The second scenario seems more secure, but has the opposite concern of putting too much business logic in stored procedures on the database. This seems to limit the use of some really cool technologies like NHibernate and LINQ. However, in this day and age, where people can use data in so many different ways we don't foresee (e.g. mash-ups), is this the best approach?
Dale - That's it exactly. If you want to provide access to the underlying data store to those users then do it via services. And in my experience, it is those experienced computer users coming out of Uni/College that damage things the most. As the saying goes, they know just enough to be dangerous.
If they want to automate part of their job, and they can show they have the requisite knowledge, then go ahead and grant their domain account access to the backend. That way anything they do via their little VBA automation is tied to their account, and you know exactly who to go look at when the data gets hosed.
My basic point is that the database is the proverbial holy grail of the application. You want as few fingers in that particular pie as possible.
As a consultant, whenever I hear that someone has allowed normal users into the database, my eyes light up because I know it's going to end up being a big paycheck for me when I get called to fix it.
Personally, I don't want normal end users in the database. For an intranet application (especially one which resides on a Domain) I would provide a single account for application access to the database which only has those rights which are needed for the application to function.
Access to the application would then be controlled via the user's domain account (turn off anonymous access in IIS, etc.).
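In web.config terms that's roughly the following sketch, paired with disabling anonymous access in the IIS console:

```xml
<system.web>
  <!-- Authenticate callers with their domain accounts... -->
  <authentication mode="Windows" />
  <authorization>
    <!-- ...and refuse anonymous users outright ("?" means anonymous). -->
    <deny users="?" />
  </authorization>
</system.web>
```

Meanwhile the database connection string uses Integrated Security=SSPI, so it is the single application account (the app pool identity), not the end user, that reaches SQL Server.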
If a user needs, and can justify, direct access to the database, then their domain account is given access to the database, and they can log into the DBMS using the appropriate tools.
I've been responsible for developing several internal web applications over the past year.
Our solution was using Windows Authentication (Active Directory or LDAP).
Our purpose was merely to allow a simple login using an existing company ID/password. We also wanted to make sure that the existing department would still be responsible for verifying and managing access permissions.
While I can't answer the argument concerning NHibernate or LINQ, unless you have a specific killer feature that requires them, Active Directory or LDAP are simple enough to implement and maintain that they're worth trying.
I agree with Stephen Wrighton. Domain security is the way to go. If you would like to use mashups and what-not, you can expose parts of the database via a machine-readable RESTful interface. SubSonic has one built in.
Stephen - keeping normal end users out of the database is nice, but I am wondering whether, in this day and age, with so many experienced computer users coming out of university/college, this is the right path. If someone wants to automate part of their job with a VBA update to a database that I currently let them change only via the normal application, are we losing gains by restricting their access in this way?
I guess the other path implied here is that you could open up the application via services and then secure those services via groups, still keeping the users separated from the database.
Then, via delegation, you can allow departments to control access to their own accounts via the groups, as per Jonathan's post.