Rolling deployment of a web application in a web farm - ASP.NET

We have an ASP.NET web application with SQL Server that is deployed on a server farm with federated databases. We use stored procedures (as opposed to prepared SQL statements) and in-proc sessions. As part of achieving high availability (at least for service packs with a controlled set of changes), we intend to use rolling deployments on the farm, which means we do this:
1. Shut down a group of servers.
2. Deploy the application on these servers.
3. Bring these servers back up.
4. Shut down another group and repeat steps 1-3 for all the groups.
Though this means some users would be kicked off, the application stays available and a maintenance page does not need to be put up.
Deploying the web application is the easy part; the tougher part is handling changes to stored procedures (e.g. a new parameter is added). There will be a point at which both versions of a stored procedure are required: the existing one and the new one being deployed.
We have considered 4 options for the stored procedures:
1. Do not use rolling deployments when a release includes a stored procedure change.
2. If a rolling deployment is used for a release, allow only new stored procedures, even if that means code duplication.
3. Introduce stored procedure versioning and a framework component in the app tier that automatically appends the version number to the sproc being invoked (see the sketch below).
4. Overwrite the existing stored procedure and allow some stored procedure calls to fail.
All the approaches have pros and cons; of these, 3) seems the most viable but is also the most complex.
Which one would you recommend? Are there any tricks in SQL Server to handle this scenario? Are there any other approaches?
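To make option 3 concrete, here is a minimal sketch of the kind of app-tier component I have in mind, assuming a simple version-suffix naming convention (all names here are illustrative, not existing code):

using System.Data;
using System.Data.SqlClient;

// Hypothetical helper: all data access goes through this, and the schema version
// configured for the server group currently being deployed is appended to the
// procedure name (e.g. usp_GetCustomer becomes usp_GetCustomer_v12).
public static class VersionedSprocInvoker
{
    // Set from configuration as each server group is brought back up.
    public static int CurrentSchemaVersion = 12;

    public static SqlCommand CreateCommand(SqlConnection connection, string baseProcName)
    {
        var command = new SqlCommand(baseProcName + "_v" + CurrentSchemaVersion, connection);
        command.CommandType = CommandType.StoredProcedure;
        return command;
    }
}

During the rolling window both usp_GetCustomer_v11 and usp_GetCustomer_v12 would exist in the database, and the old version would be dropped once every group runs the new code.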

If you want to cover any type of database change, you might want to take a look at database mirroring and rolling upgrades.
Excerpt from link:
Improves the availability of the production database during upgrades.
To minimize downtime for a mirrored database, you can sequentially upgrade the instances of SQL Server that are participating in a database mirroring session. This will incur the downtime of only a single failover. This form of upgrade is known as a rolling upgrade.

Related

Web application taking a long time to execute

Hi, I am writing a web application that connects to 700 databases and executes a basic SELECT query.
For example:
There is a button to retrieve the managers of each branch.
There are 700 branches of a company, and each branch's details are stored in a separate database.
The SELECT query retrieves one record from each database and returns the manager of that branch.
So executing this code takes a long time.
I cannot make the user wait that long (30 minutes).
Due to memory constraints I cannot use multithreading.
Note: this web application uses Spring MVC on Tomcat 7.
Any workaround possible?
With that many databases to query, the only possible solution I can see is caching. If real time is not a concern (note that 30 minutes of execution pushes you out of real time anyway), you might explore the following possibilities, all of which require centralizing data into a single, logical or physical database:
Clustering: put the database servers in a huge cluster that is configured for performance and therefore uses caching internally. Depending on licence costs, this solution might be impractical or simply too expensive.
Push data to a central database: all of the 700 database servers would push the data you need to a central database that your application will use. You can use database servers' replication features (such as in MSSQL or PostgreSQL) or scheduled data transfers. This method requires administrative access to the database servers to either configure replication or drop scripts to run on a scheduled basis.
Pull data from a central database host: have a centralized host fetch the required data into a local database, the tables of which are updated through scheduled data transfers. This is the simplest method. Its drawback is that real time querying is impossible.
It is key to transfer only the data you need. Make your select statements as narrow as possible to limit execution time.
The central database could live on your web application server, or on a distinct machine if your resource constraints are tight. I've found that PostgreSQL, with little effort, has excellent compatibility with MSSQL. Without further information it's difficult to be more precise.
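To illustrate the "pull data from a central database host" option, here is a rough sketch of the scheduled transfer job. It is written in C# to match the rest of this page even though the asker's stack is Java; the pattern carries over directly, and every table, column and connection-string name here is invented:

using System.Collections.Generic;
using System.Data.SqlClient;

public static class BranchManagerTransfer
{
    // Scheduled job: pull one narrow row from each branch database into a central
    // table that the web application queries instead of the 700 branch databases.
    // Assumes CentralBranchManagers is emptied (or upserted) by the same job.
    public static void Refresh(IEnumerable<string> branchConnectionStrings,
                               string centralConnectionString)
    {
        using (var central = new SqlConnection(centralConnectionString))
        {
            central.Open();
            foreach (var branchCs in branchConnectionStrings)
            {
                using (var branch = new SqlConnection(branchCs))
                using (var select = new SqlCommand(
                    "SELECT TOP 1 BranchId, ManagerName FROM BranchManager", branch))
                {
                    branch.Open();
                    using (var reader = select.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            using (var insert = new SqlCommand(
                                "INSERT INTO CentralBranchManagers (BranchId, ManagerName) " +
                                "VALUES (@id, @name)", central))
                            {
                                insert.Parameters.AddWithValue("@id", reader.GetInt32(0));
                                insert.Parameters.AddWithValue("@name", reader.GetString(1));
                                insert.ExecuteNonQuery();
                            }
                        }
                    }
                }
            }
        }
    }
}

Run it on whatever schedule the business can tolerate (cron, a Windows task, or a database agent job) and have the button query only the central table.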

NoSQL and AppFabric with Azure

I have an ASP.NET application that I'm moving to Azure. In the application, there's a query that joins 9 tables to produce a user record. Each record is then serialized to JSON and sent back and forth with the client. To improve query performance, the first time the 9-table query runs and the record is serialized to JSON, the resulting string is saved to a table called JsonUserCache. The table has only 2 columns: JsonUserRecordID (which is unique) and JsonRecord. Each time the client requests a user record, the JsonUserCache table is queried first to avoid having to run the query with the 9 joins. When the user logs off, the records they created in the JsonUserCache are deleted.
The JsonUserCache table is in SQL Server. I could simply leave everything as is, but I'm wondering if there's a better way. I'm thinking about creating a simple dictionary to hold the key/value pairs and putting that dictionary in AppFabric. I'm also considering a NoSQL provider, and wondering whether there's an option for that on Azure or whether I should just stick to a dictionary in AppFabric. Or is there another alternative?
Thanks for your suggestions.
"There are only two hard problems in Computer Science: cache invalidation and naming things."
Phil Karlton
You are clearly talking about a cache, and as a general principle you should not persist cached data (in SQL or anywhere else), because you then have the problem of expiring the cache and doing the deletes (as you currently are). If you insist on storing the result somewhere and don't mind the clearing up afterwards, then look at putting it in an Azure blob - this is easily accessible from the browser and doesn't require that the request be handled by your own application.
To implement it as a traditional cache, look at these options.
Use out-of-the-box ASP.NET caching, where you cache in memory on the web role. This means that your join will be re-run on every instance that the user goes to, but depending on the number of instances and the duration of the average session, it may be the simplest to implement.
Use AppFabric Cache. This is an extra API to learn and has additional costs which may get quite high if you have lots of unique visitors.
Use a specialised distributed cache such as Memcached. This has the added cost/hassle of having to run it all yourself, but gives you lots of flexibility in the long run.
Edit: all of these are RAM based. ASP.NET caching is simpler to implement and faster to retrieve data from because the cache is on the same machine - BUT it requires the cache to be populated for each instance of the web role (i.e. it is not distributed). AppFabric caching is distributed but also a bit slower (network latency) and, depending on what you mean by scalable, it currently behaves a bit erratically at scale - so make sure you run tests. If you want scalable, feature-rich distributed caching, and it is a big part of your application, go and put in Memcached.
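A minimal sketch of the first option, in-memory ASP.NET caching in place of the JsonUserCache table; the loader delegate and the 20-minute sliding window are assumptions, not part of the original design:

using System;
using System.Web;
using System.Web.Caching;

public static class UserJsonCache
{
    // Returns the cached JSON for a user record, running the expensive 9-join
    // query only on a cache miss.
    public static string GetUserJson(string jsonUserRecordId, Func<string> runNineJoinQuery)
    {
        var cached = HttpRuntime.Cache.Get(jsonUserRecordId) as string;
        if (cached != null)
            return cached;

        var json = runNineJoinQuery();
        HttpRuntime.Cache.Insert(jsonUserRecordId, json, null,
                                 Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(20));
        return json;
    }

    // Call this on logoff instead of DELETEing rows from JsonUserCache.
    public static void Invalidate(string jsonUserRecordId)
    {
        HttpRuntime.Cache.Remove(jsonUserRecordId);
    }
}

Each web role instance pays for the 9-join query at most once per user per expiry window, which is the trade-off described above.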

Live Data Web Application Design

I'm about to begin designing the architecture of a personal project that has the following characteristics:
Essentially a "game" containing several concurrent users based on a sport.
Matches in this sport are simulated on a regular basis and their results stored in a database.
Users can view the details of a simulated match "live" when it is occurring as well as see results after they have occurred.
I developed a similar web application with a much smaller scope as the previous iteration of this project. In that case, however, I chose to go with SQLite as my DB provider since I also had a redistributable desktop application that could be used to manually simulate matches (and in fact that ran as a standalone simulator outside of the web application). My constraints have now shifted to be only a web application, so I don't have to worry about this additional level of complexity.
My main problem with my previous implementation was handling concurrent requests. I made the mistake of using one database (which was represented by a single file on disk) to power both the simulation aspect (which ran in a separate process on the server) and the web application. Hence, when users were accessing the website concurrently with a live simulation happening, there were all sorts of database access issues since it was getting locked by one process. I fixed this by implementing a cross-process mutex on database operations but this drastically slowed down the performance of the website.
The tools I will be using are:
ASP.NET for the web application.
SQL Server 2008 R2 for the database... probably with an NHibernate layer for object relational mapping.
My question is, how do I design this to achieve optimal efficiency as well as concurrent access? Obviously shifting from a file to an actual DB server will have its positives, but do I need two separate servers, one for the simulation process and one for the web server process?
Any suggestions would be appreciated!
Thanks.
You should be fine doing both on the same database. Concurrent access is what modern database engines are designed for. Concurrent reads are usually no problem at all; concurrent writes lock the minimum possible amount of data (a table, or even just a number of rows), not the entire database.
A few things you should keep in mind though:
Use transactions wisely. On the one hand, a transaction is an important tool in making sure your database is always consistent - in short, a transaction either happens completely, or not at all. On the other hand, two concurrent transactions can cause deadlocks, and those buggers can be extremely hard to debug.
Normalize, and use constraints to protect your data integrity. Enforcing foreign keys can save the day, even though it often leads to more cumbersome administration.
Minimize the amount of time spent on data access: don't keep connections around when you don't need them, make absolutely sure you're not leaking any connections, don't fetch data you know you don't need, and do as much data-related processing as possible (especially things that can be solved using joins, subqueries, groupings, views, etc.) in SQL instead of in code. A short sketch of the connection-and-transaction pattern follows.
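For illustration, the open-late, dispose-early pattern with a short transaction, shown with plain ADO.NET for brevity (with NHibernate the same idea applies through ISession and ITransaction); the table, columns and connection string are placeholders:

using System.Data.SqlClient;

public static class MatchResultWriter
{
    // Open the connection as late as possible, keep the transaction short, and
    // let the using blocks guarantee nothing leaks.
    public static void RecordResult(string connectionString, int matchId,
                                    int homeScore, int awayScore)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                using (var command = new SqlCommand(
                    "UPDATE Matches SET HomeScore = @home, AwayScore = @away " +
                    "WHERE MatchId = @id", connection, transaction))
                {
                    command.Parameters.AddWithValue("@home", homeScore);
                    command.Parameters.AddWithValue("@away", awayScore);
                    command.Parameters.AddWithValue("@id", matchId);
                    command.ExecuteNonQuery();
                }
                transaction.Commit();
            } // rolled back automatically on Dispose if Commit was never reached
        } // connection is returned to the pool here, so nothing leaks
    }
}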

How can I handle a web application that connects to many SQL Server databases?

I am building an ASP.NET web application that will use SQL Server for data storage. I am inheriting an existing structure and I am not able to modify it very much. The people who use this application are individual companies who have paid to use the application. Each company has about 5 or 10 people who will use the application. There are about 1000 companies. The way that the system is currently structured, every company has their own unique database in the SQL Server instance. The structure of each database is the same. I don't think that this is a good database design but there is nothing I can do about it. There are other applications that hit this database and it would be quite an undertaking to rewrite the DB interfaces for all of those apps.
So my question is how to design the architecture for the new web app. There are times of the month when the site will get a lot of traffic. My feeling is that the site will not perform well at those times, because I am guessing that when 500 people from different companies access the site simultaneously, they will each have their own unique database connection, since they are accessing different SQL Server databases with different connection strings, and so no connection pooling will take place. My impression is that this is bad.
What happens if they double their number of customers? How many unique database connections can SQL Server handle? Is this a situation where I should tell the client that they must redesign this if they want to remain scalable?
Thanks,
Corey
You don't have to create separate connections for every DB.
I have an app that uses multiple DBs on the same server. I prefix each query with "USE dbName; "
I've even run queries against two separate DBs in the same call.
As for calling stored procs, it's a slightly different process. Since you can't do
Use myDB; spBlahBLah
Instead you have to explicitly change the DB on the connection object. In .NET it looks something like this:
myConnection.ChangeDatabase("otherDBName");
then call your stored procedure.
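A slightly fuller version of that pattern (the connection string parameter, "otherDBName" and "spBlahBlah" are placeholders):

using System.Data;
using System.Data.SqlClient;

public static class CrossDatabaseCall
{
    public static void CallProcInOtherDatabase(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            connection.ChangeDatabase("otherDBName");   // switch catalogs on the open connection
            using (var command = new SqlCommand("spBlahBlah", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.ExecuteNonQuery();
            }
        }
    }
}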
Hopefully, you have a single database for common items. Here, I hope you have a Clients table with IsEnabled, Logo, PersonToCallWhenTheyDontPayBills, etc. Add a column for Database (i.e. catalog) and, while you're at it, Server. Your web application will point to the common database when starting up and build the list of database connections per client. Programmatically build your database connection strings from the Server and Database columns in the table, as sketched below.
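A minimal sketch of building a per-client connection string from those two columns (the authentication settings are an assumption; adjust to however you actually connect):

using System.Data.SqlClient;

public static class ClientConnectionStrings
{
    // Server and Database come straight from the Clients table described above.
    public static string Build(string server, string database)
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = server,
            InitialCatalog = database,
            IntegratedSecurity = true
        };
        return builder.ConnectionString;
    }
}

Note that ADO.NET pools per distinct connection string, so each client database ends up with its own small pool rather than no pooling at all.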
UPDATE:
After my discussion with @Neil, I want to point out that my method assumes a singleton database connection. If you don't do this then it would be silly to follow my advice.
Scaling is a complex issue. However, why are you not scaling the web aspect as well? Then the connection pooling is limited to the web application.
edit:
I'm talking about the general case here. I know that pooling occurs at many levels, not just the IDbConnection (http://stackoverflow.com/questions/3526617/are-ado-net-2-0-connection-pools-pre-application-domain-or-per-process). I was wondering whether the questioner had considered scaling at the web application level.

Performance issues with ASP.NET MVC/WCF site & Oracle backend

We are building an extranet loan status check website using ASP.NET MVC with a WCF backend. It's a pretty standard design, with the MVC site using a WCF service reference to get customer objects. The service uses an Oracle backend + HTTP binding, and won't be hosted on the same server as the MVC site (so we can't use TCP binding to reduce latency).
The problem we encountered is that every call to the service results in a 7-8s response time, which is unacceptable for an extranet site and much higher than the 2s magic mark. The service method(s) call 12 stored procedures to create the customer object. The database is, unfortunately, denormalized (we can't change it as it's also used by other in-house production systems), so most of the calls are basic select statements which populate the customer object and its associated objects. The service proxy is properly opened and closed/disposed in the MVC actions, so there are no service connection leaks. A new client proxy is created for every request (i.e., we are not using the singleton pattern for the service).
Any ideas how we can speed this up?
Thanks
It sounds like you already know where the problem is - it's the database.
I've never heard of a WCF operation taking more than a fraction of a second to set up and tear down, excluding any logic inside. So even if you could shave off 1-2 seconds of latency (which is probably an optimistic estimate), that doesn't really help if the database operation takes 5-6 seconds by itself.
Honestly? Running 12 stored procedures to create a customer is completely off-the-wall. The purpose of a stored procedure is to encapsulate all of the logic necessary to perform a complex database operation. The very first thing you need to do is change this to be one stored procedure - then if it's still slow, profile the database to see what's taking so long and fix it accordingly. Usually poor database performance is due to one or more missing indexes.
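If the twelve procedures are collapsed into one that returns several OUT ref cursors, the whole customer object can come back in a single round trip. A hedged ODP.NET sketch (the package, parameter names and mapping are invented, not the site's actual code):

using System.Data;
using Oracle.ManagedDataAccess.Client;

public static class CustomerBundleLoader
{
    // One round trip instead of 12: a single procedure hands back several OUT
    // ref cursors, one per result set that used to be its own call.
    public static void Load(string connectionString, int customerId)
    {
        using (var connection = new OracleConnection(connectionString))
        using (var command = new OracleCommand("pkg_loan.get_customer_bundle", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add(new OracleParameter("p_customer_id", OracleDbType.Int32) { Value = customerId });
            command.Parameters.Add(new OracleParameter("p_customer", OracleDbType.RefCursor) { Direction = ParameterDirection.Output });
            command.Parameters.Add(new OracleParameter("p_loans", OracleDbType.RefCursor) { Direction = ParameterDirection.Output });
            // ...one OUT cursor for each of the remaining result sets

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read()) { /* map the customer row */ }
                reader.NextResult();                    // advance to the next ref cursor
                while (reader.Read()) { /* map the loan rows */ }
            }
        }
    }
}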
Until you accurately measure what is really happening, don't be too quick to assume where the bottleneck is.
You really need to do an Oracle extended SQL trace to see where that slowness is coming from. Anything other than that is mostly guesswork. Here is a downloadable paper from Cary Millsap (of Method R and formerly of Hotsos) that details how to do this:
http://method-r.com/downloads/doc_details/10-for-developers-making-friends-with-the-oracle-database-cary-millsap
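For reference, the trace the paper describes can be switched on from the service's own Oracle connection around one of the slow calls (a sketch; the tracefile identifier is arbitrary and the ALTER SESSION privilege is assumed):

using Oracle.ManagedDataAccess.Client;

public static class ExtendedTrace
{
    // Turns on event 10046 (extended SQL trace) for the session behind this open
    // connection; level 12 records waits and bind values. The trace file is
    // written on the database server.
    public static void Enable(OracleConnection openConnection)
    {
        using (var cmd = openConnection.CreateCommand())
        {
            cmd.CommandText = "ALTER SESSION SET TRACEFILE_IDENTIFIER = 'loanstatus'";
            cmd.ExecuteNonQuery();
            cmd.CommandText =
                "ALTER SESSION SET EVENTS '10046 trace name context forever, level 12'";
            cmd.ExecuteNonQuery();
            // With connection pooling the setting sticks to the pooled session this
            // connection maps to, so trace one slow request and turn it off afterwards.
        }
    }
}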
