How to implement locking across a server farm? (ASP.NET)

Are there well-known best practices for synchronizing tasks across a server farm? For example, suppose I have a forum-based website running on a server farm, and two moderators try to perform an action that requires writing to multiple tables in the database, with each moderator's request handled by a different server in the farm. How can I implement some locking functionality to ensure they can't take that action on the same item at the same time?
So far, I'm thinking about using a table in the database to synchronize: e.g. check for the item's id in the table; if it doesn't exist, insert it and proceed, otherwise return. A shared cache could probably also be used for this, but I'm not using one at the moment.
Any other way?
By the way, I'm using MySQL as my database back-end.
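To make the lock-table idea above concrete, here's a minimal sketch of what I have in mind (the locks table, its UNIQUE key on item_id, and the MySql.Data connector usage are just my assumptions):

using MySql.Data.MySqlClient;

public static class ItemLocks
{
    // Try to take the lock by inserting a row; the UNIQUE key on item_id makes
    // the INSERT fail with a duplicate-key error if another server already holds it.
    public static bool TryAcquire(MySqlConnection conn, int itemId, string owner)
    {
        const string sql =
            "INSERT INTO locks (item_id, owner, acquired_at) VALUES (@id, @owner, NOW())";
        using (var cmd = new MySqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@id", itemId);
            cmd.Parameters.AddWithValue("@owner", owner);
            try
            {
                cmd.ExecuteNonQuery();
                return true;                  // we own the lock
            }
            catch (MySqlException ex) when (ex.Number == 1062)
            {
                return false;                 // duplicate key: someone else got there first
            }
        }
    }

    // Release the lock once the moderator action has finished.
    public static void Release(MySqlConnection conn, int itemId)
    {
        using (var cmd = new MySqlCommand("DELETE FROM locks WHERE item_id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", itemId);
            cmd.ExecuteNonQuery();
        }
    }
}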

Your question implies data-level concurrency control -- in that case, use the RDBMS's concurrency control mechanisms.
That will not help you if you later wish to control application-level actions which do not necessarily map one-to-one to a data entity (e.g. table record access). The general solution there is a reverse-proxy server that understands application-level semantics and serializes requests accordingly if necessary. (That will negatively impact availability.)
It probably wouldn't hurt to read up on the CAP theorem as well!

You may want to investigate a distributed locking service such as ZooKeeper. It's a reimplementation of a Google service that provides very fast, distributed resource-locking coordination for applications. I don't know how easy it would be to incorporate into a web app, though.

If all the state is in the (central) database then the database transactions should take care of that for you.
See http://en.wikipedia.org/wiki/Transaction_(database)

It may be irrelevant for you because the question is old, but it may still be useful for others, so I'll post it anyway.
You can use a "SELECT ... FOR UPDATE" query on a locking row, so you actually use the database to achieve the locking mechanism.
If you use an ORM, you can do the same. For example, in NHibernate you can do:
session.Lock(Member, LockMode.Upgrade);
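If you're not using an ORM, the raw version against MySQL looks roughly like this (a sketch only; the forum_items table and column names are illustrative, and conn is assumed to be an open MySqlConnection):

using MySql.Data.MySqlClient;

public static void ModerateItem(MySqlConnection conn, int itemId)
{
    using (var tx = conn.BeginTransaction())
    {
        // Lock the row: any other transaction issuing FOR UPDATE on the same row now waits.
        using (var cmd = new MySqlCommand(
            "SELECT id FROM forum_items WHERE id = @id FOR UPDATE", conn, tx))
        {
            cmd.Parameters.AddWithValue("@id", itemId);
            cmd.ExecuteScalar();
        }

        // ... perform the multi-table moderator updates here, inside the same transaction ...

        tx.Commit();   // committing (or rolling back) releases the row lock
    }
}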

Having a table of locks is an OK way to do it; it is simple and it works.
You could also host the code as a service on a single server, more of an SOA approach.
You could also use a timestamp field with transactions: if the timestamp has changed since you last read the data, you revert the transaction. So whoever gets in first has priority.
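A rough sketch of the timestamp idea, assuming a row_version column on the table (all names here are illustrative, not a definitive implementation):

using System;
using MySql.Data.MySqlClient;

public static class OptimisticUpdate
{
    public static bool TryUpdateTitle(MySqlConnection conn, int itemId, string newTitle, DateTime originalVersion)
    {
        // Only update if nobody has touched the row since we read it.
        const string sql =
            "UPDATE forum_items SET title = @title, row_version = NOW() " +
            "WHERE id = @id AND row_version = @version";
        using (var cmd = new MySqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@title", newTitle);
            cmd.Parameters.AddWithValue("@id", itemId);
            cmd.Parameters.AddWithValue("@version", originalVersion);
            // Zero rows affected means someone else won the race; treat it as a conflict.
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}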


SignalR and Memcached for persistent Group data

I am using SignalR with my ASP.NET application. What my application needs is to persist the group data that is updated from various servers. According to the SignalR documentation it's my responsibility to do this. That means I need to use an external server/service that collects the data from one or more servers, and that I can query from a single place.
I first thought that Memcached was the best candidate, because it's fast and the data that I need to put there is volatile. The problem is that I need to store collections, for example: collection A with user ids, so I can have collection A with 2,000 user ids and collection B with 40,000 ids. I need to update these collections, removing and inserting ids, very quickly. I'm afraid that because the commands will be initiated from several servers, and because I might need to read an entire collection and update it on either web server, the data won't be consistent. Web server A might update the data, but server B could read the data before server A has finished updating it. There is a concurrency conflict.
I'm searching for the best way to implement this kind of strategy in my ASP.NET 4.5 application. I think an in-memory database might be a choice here, or something else that ensures data integrity.
I want to ask you what is the best solution for my problem.
Here's an example of my problem:
Memcached server - stores the collections (e.g. collections A, B, C, D); each collection stores user ids, which can number in the thousands or far more.
Web servers - my Amazon EC2 web servers with SignalR installed, possibly behind a load balancer. Those servers need to access the Memcached server and get a collection's complete set of items by collection name (e.g. "Collection_23"). They need to be able to remove items (user ids) and add items. All this should be as fast as possible.
I hope that I explained myself right. Thanks.
Alternatively, you can use Redis; like Memcached, everything is served from memory. Redis has many other capabilities beyond a simple key-value datastore; for your specific case you might use Redis transactions, which ensure data consistency.
A comment on another post links to a Redis provider. That link is broken; it seems the provider is now integrated into the main SignalR project: https://github.com/SignalR/SignalR/tree/master/src/Microsoft.AspNet.SignalR.Redis
The Redis NuGet package is here:
http://www.nuget.org/packages/Microsoft.AspNet.SignalR.Redis
and documentation here:
http://www.asp.net/signalr/overview/signalr-20/performance-and-scaling/scaleout-with-redis
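For reference, wiring up the Redis backplane is roughly this (the server name, port, password and event key are placeholders, and the exact overload can differ between SignalR versions, so treat this as a sketch based on the linked documentation):

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // All web servers point at the same Redis instance, which relays messages between them.
        GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "password", "MyApp");
        app.MapSignalR();
    }
}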

Multi-Database Transactional System & ASP.NET MVC

So I have a challenge to build a site that people online can use to interact with organizations: an ASP.NET MVC customer application.
One of the requirements is financial processing and accounting.
I'm very comfortable using SQL Transactions and stored procedures to do this; i.e. CreateCustomer also creates an entity, and an account record. We have a stored procedure to do this, that does a begin transaction, creates some setup records we need, then does a commit. I'm not seeing a good way to do this with an ORM, and after reading some great blog articles I'm starting to wonder if I'm going down the wrong path.
Part of the complexity here is the data itself:
I'm querying x databases (one per existing customer) to get some of my data, though my app has its own data store as well. I need to query the x databases, run stored procedures on the x databases, and also run against my own data store.
I'm not seeing strong ORM support for things like stored procedures, and therefore transactions, though it does seem to be present.
Maybe I'm just trying to make my app a nail here, cause the MVC hammer is sooo shiny. I'm plenty comfortable with raw ADO.NET of course, but I'm in love with the expressive feel to writing Linq code in C# and I'd rather not give up on it.
Down to the question:
Is this a bad idea? Should I try to use Linq / Entity Framework, or something like nHibernate... and stick with the ORM pattern or should I trash it and use raw ADO.NET data access?
Edit: a note on scale; from a queries per second standpoint this app is not "huge". But, from a data complexity perspective, it does need to query against 50+ databases (all identical, or close to it) to read data from an external application and publish data back to that application. ORM feels right when dealing with "my" data store, but feels very wrong for accessing the data from the external application.
From a certain size (number of databases) up, you have to change the paradigm. Are you at that size?
When you deploy what is ultimately a distributed application and yet try to control it as an ordinary local application, you are going to run into a set of fundamental issues around availability, scalability and correctness. If you reach for concepts like 'distributed transactions', 'linked servers' and 'ORM', you are down the wrong path. True distributed applications will use terms like 'message', 'queue' and 'service'. Terms like LINQ, EF and NHibernate are all fine and good, but none will bring you anything extra beyond what a simple Transact-SQL SELECT statement brings. In other words, if a SELECT solves your issues, then the various client-side ORMs will work. If not, they won't add any miraculous value.
I recommend you go over the slides on the SQLCAT: High Performance Distributed Applications in Real World Deployments which explain how a site like MySpace manages to read and write into a store of nearly 500 servers and thousands of databases.
Ultimately what you need to internalize is this: one database can have 95% availability (uptime and acceptable service response time). A system consisting of 10 databases, each with 95% availability, has about 59% availability (0.95^10). A system of 100 databases each with 99.5% availability has about 60% availability, and 1,000 databases each with 99.95% availability (5 minutes of downtime per week) likewise come out to about 60% availability. And that is the ideal situation. In reality there is always a snowball effect caused by resource consumption (e.g. threads blocked trying to access an unavailable or slow resource) that makes things far worse.
This means that you cannot write a large distributed system relying on synchronous, tightly coupled operations and transactions. It is simply impossible. You have to rely on asynchronous operations (usually messaging and queues), which is something completely different from your run-of-the-mill database application.
Use the TransactionScope object available in System.Transactions.
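A minimal sketch of that, assuming two SQL Server stores and made-up stored procedure names (note that spanning two connections can escalate to a distributed transaction via MSDTC):

using System.Data.SqlClient;
using System.Transactions;

public static class CustomerCreation
{
    public static void CreateCustomer(string appDbConnStr, string customerDbConnStr)
    {
        using (var scope = new TransactionScope())
        {
            using (var appDb = new SqlConnection(appDbConnStr))
            {
                appDb.Open();
                new SqlCommand("EXEC CreateCustomerSetup", appDb).ExecuteNonQuery();
            }

            using (var customerDb = new SqlConnection(customerDbConnStr))
            {
                customerDb.Open();
                new SqlCommand("EXEC PublishCustomer", customerDb).ExecuteNonQuery();
            }

            scope.Complete();   // nothing commits unless we reach this line
        }
    }
}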
What I have chosen is to use Entity Framework to allow access to the application's main data store, and create a custom DAL for access to external application data and for access to stored procedures within the application.
Here's hoping Entity Framework 4.0 fixes the issue. For now, I'm using the concept listed here.
http://social.msdn.microsoft.com/forums/en-US/adodotnetentityframework/thread/44a0a7c2-7c1b-43bc-98e0-4d072b94b2ab/

Using static data in ASP.NET vs. database calls?

We are developing an ASP.NET HR Application that will make thousands of calls per user session to relatively static database tables (e.g. tax rates). The user cannot change this information, and changes made at the corporate office will happen ~once per day at most (and do not need to be immediately refreshed in the application).
About 2/3 of all database calls are to these static tables, so I am considering just moving them into a set of static objects that are loaded during application initialization and then refreshed every 24 hours (if the app has not restarted during that time). Total in-memory size would be about 5MB.
Am I making a mistake? What are the pitfalls to this approach?
From the info you present, it looks like you definitely should cache this data -- rarely changing and so often accessed. "Static" objects may be inappropriate, though: why not just access the DB whenever the cached data is, say, more than N hours old?
You can vary N at will, even if you don't need special freshness -- even hitting the DB 4 times or so per day will be much better than "thousands [of times] per user session"!
It may be best to keep, alongside the DB info, a timestamp or datetime recording when it was last updated. That way, the check for "is my cache still fresh?" is typically very lightweight: just fetch that "latest update" value and compare it with the latest update against which you rebuilt the local cache. It's rather like an HTTP "If-Modified-Since" caching strategy, except you'd be implementing most of it DB-client-side ;-).
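A tiny sketch of that idea, ignoring thread-safety for brevity (the TaxRates table, LastUpdated column and full-reload query are just assumptions):

using System;
using System.Data;
using System.Data.SqlClient;

public static class TaxRateCache
{
    private static DateTime _cacheStamp;
    private static DataTable _taxRates;

    public static DataTable GetTaxRates(SqlConnection conn)
    {
        // Cheap freshness check: one scalar query instead of reloading the whole table.
        var latest = (DateTime)new SqlCommand(
            "SELECT MAX(LastUpdated) FROM TaxRates", conn).ExecuteScalar();

        if (_taxRates == null || latest > _cacheStamp)
        {
            _taxRates = LoadTaxRates(conn);   // full reload, only when something changed
            _cacheStamp = latest;
        }
        return _taxRates;
    }

    private static DataTable LoadTaxRates(SqlConnection conn)
    {
        var table = new DataTable();
        new SqlDataAdapter("SELECT * FROM TaxRates", conn).Fill(table);
        return table;
    }
}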
If you decide to cache the data (vs. making a database call each time), use the ASP.NET Cache instead of statics. The ASP.NET Cache provides functionality for expiry, handles multiple concurrent requests, and can even invalidate the cache automatically using the query-notification features of SQL Server 2005+.
If you use statics, you'll probably end up implementing those things anyway.
There are no drawbacks to using the ASP.NET Cache for this. In fact, it's designed for caching data too (see the SqlCacheDependency class http://msdn.microsoft.com/en-us/library/system.web.caching.sqlcachedependency.aspx).
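For illustration, a minimal sketch using the cache with a plain 24-hour absolute expiration (key and type names are made up; a SqlCacheDependency could replace the expiration once query notifications are configured):

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class StaticDataCache
{
    public static DataTable GetTaxRates(Func<DataTable> loadFromDb)
    {
        var rates = (DataTable)HttpRuntime.Cache["TaxRates"];
        if (rates == null)
        {
            rates = loadFromDb();
            // Refresh roughly once a day; concurrent readers are handled by the cache itself.
            HttpRuntime.Cache.Insert(
                "TaxRates", rates, null,
                DateTime.UtcNow.AddHours(24),
                Cache.NoSlidingExpiration);
        }
        return rates;
    }
}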
With caching, a dbms is plenty efficient with static data anyway, especially only 5M of it.
True, but the point here is to avoid the database roundtrip at all.
ASP.NET Cache is the right tool for this job.
You didn't state how you will find the matching data for a user. If it is as simple as finding a foreign key in the cached set, then you don't have to worry.
If you implement some kind of filtering/sorting/paging, or worse, searching, then you might at some point miss the querying capabilities of SQL.
ORMs often have their own querying, and LINQ makes things easy too, but it is still not SQL.
(Try to group by two columns, for instance; see the sketch below.)
Sometimes it is a good approach to have the DB return only the keys of a result set and use the cache to fill in the complete items.
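For what it's worth, grouping by two columns in LINQ is done with an anonymous-type key; a tiny illustrative example (Employee and its fields are made up):

using System.Collections.Generic;
using System.Linq;

public class Employee
{
    public string Department { get; set; }
    public string Role { get; set; }
    public decimal Salary { get; set; }
}

public static class GroupingExample
{
    public static void Run(IEnumerable<Employee> employees)
    {
        var byDeptAndRole = employees
            .GroupBy(e => new { e.Department, e.Role })                        // two-column key
            .Select(g => new { g.Key.Department, g.Key.Role, Count = g.Count() })
            .ToList();
    }
}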
Think: Premature Optimization. You'll still need to deal with the data as tables eventually anyway, and you'd be leaving an "unusual design pattern".
With even default caching, a DBMS is plenty efficient with static data anyway, especially only 5MB of it. And the DBMS partitioning you're describing is often described as an antipattern. One example: multiple identical databases for multiple clients. There are other questions here on SO about this pattern. I understand there are security issues, but doing it this way creates other security issues. I've recently seen this same concept in a medical billing database (even more highly sensitive) that ultimately had to be refactored into a single database.
If you do this, then I suggest you at least wait until you know it's solving a real problem, and then test to measure how much difference it makes. There are lots of opportunities here for Unintended Consequences.

Caching data and notifying clients about changes in data in ASP.NET

We are thinking to make some architectural changes in our application, which might affect the technologies we'll be using as a result of those changes.
The change that I'm referring in this post is like this:
We've found out that some parts of our application have common data and common services, so we extracted those into a GlobalServices service, with its own master data db.
Now, this service will probably have its own cache, so that it won't have to retrieve data from the db on each call.
So, when one client makes a call to that service that updates data, other clients might be interested in that change, or not. Now that depends on whether we decide to keep a cache on the clients too.
Meaning that if the clients will have their own local cache, they will have to be notified somehow (and first register for notifications). If not, they will always get the data from the GlobalServices service.
I need your educated advice here guys:
1) Is it a good idea to keep a local cache on the clients to begin with?
2) If we do decide to keep a local cache on the clients, would you use SqlCacheDependency to notify the clients, or would you use WCF for notifications? (Each might have its pros and cons.)
Thanks a lot folks,
Avi
I like the sound of your SqlCacheDependency, but I will answer this from a different perspective, as I have worked with a team on a similar scenario. We created a master database and used triggers to create XML representations of data that was being changed in the master, storing them in a TransactionQueue table along with a bit of metadata about what changed, when, and who changed it. The client databases would periodically check the queue for items they were interested in, process the XML, and update their own tables as necessary.
We also did the same in reverse for the client to update the master. We set up triggers and a TransactionQueue table on the client databases to send data back to the master. This in turn would update all of the other client databases when they next poll.
The nice thing about this is that it is fairly agnostic on client platform, and client data structure, so we were able to use the method on a range of legacy and third party systems. The other great point here is that you can take any of the databases out of the loop (including the master - e.g. connection failure) and the others will still work fine. This worked well for us as our master database was behind our corporate firewall, and the simpler web databases were sitting with our ISP.
There are obviously cons to this approach, such as race hazards, so we were careful with the order of transaction processing, error handling, de-duplication, etc. We also built a management GUI to provide a human-interaction layer before important data was changed in the master.
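A very rough sketch of the client-side poller described above (the TransactionQueue table, its columns, and the ApplyChange step are hypothetical):

using System.Collections.Generic;
using System.Data.SqlClient;

public static class QueuePoller
{
    public static void ProcessQueue(SqlConnection masterDb, SqlConnection clientDb)
    {
        // Read the pending changes first, then apply them, so we don't hold a reader open.
        var pending = new List<KeyValuePair<long, string>>();
        using (var cmd = new SqlCommand(
            "SELECT Id, Payload FROM TransactionQueue WHERE Processed = 0 ORDER BY Id", masterDb))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                pending.Add(new KeyValuePair<long, string>(reader.GetInt64(0), reader.GetString(1)));
        }

        foreach (var item in pending)
        {
            ApplyChange(clientDb, item.Value);   // parse the XML and update the local tables
            MarkProcessed(masterDb, item.Key);   // flag the queue row so it is not replayed
        }
    }

    private static void ApplyChange(SqlConnection clientDb, string xmlPayload)
    {
        // Hypothetical: parse the XML payload and update the client's tables accordingly.
    }

    private static void MarkProcessed(SqlConnection masterDb, long id)
    {
        using (var cmd = new SqlCommand(
            "UPDATE TransactionQueue SET Processed = 1 WHERE Id = @id", masterDb))
        {
            cmd.Parameters.AddWithValue("@id", id);
            cmd.ExecuteNonQuery();
        }
    }
}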
Good luck! Tim

ASP.NET Masters: What are the advantages / disadvantages of using Session variables?

I've done a search on this subject already, and have found the same data over and over-- a review of the three different types of sessions. (InProc, Sql, StateServer) However, my question is of a different nature.
Specifically, what is the advantages/disadvantages of using the built in .NET session in the first place?
Here is why I am asking: A fellow .NET developer has told me to NEVER use the built in Microsoft Session. Not at all. Not even create a custom Session State Provider. His reasoning for this is the following--that if you have the Session turned on in IIS it makes all of your requests happen synchronously. He says that enabling session degrades the performance of a web server.
His solution to this is to create a session yourself-- a class that stores all values you need and is serialized in and out of the database. He advises that you store the unique ID to reference this in a cookie or a querystring variable. In our environment, using a DB to store the sessions is a requirement because all the pages we make are on web farms, and we use Oracle-- so I agree with that part.
Does using the built in Session degrade performance more than a home-built Session? Are there any security concerns with this?
So to sum it all up, what are the advantages/disadvantages?
Thanks to all who answer!
My experience has been that the session is a good means of managing state when you use it appropriately. However, often times it's misused, causing the "never ever use the session" sentiment shared by many developers.
I and many other developers have run into major performance issues when we mistakenly used the session to store large amounts of data from a database, so as to "save a trip." This is bad. Storing 2,000 user records per session will bring the web server to its knees when more than a couple of users use the application. Session should not be used as a database cache.
Storing an integer per session, however, is perfectly acceptable. Small amounts of data representing how the current user is using your application (think shopping cart) are a good use of session state.
To me, it's really all about managing state. If done correctly, session can be one of many good ways to manage state. How to manage state should be decided at the beginning, though. More often than not, we've run into trouble when someone decided to just "throw something in the session".
I found this article to be really helpful when using out-of-process modes, and it contains some tips that I would have never thought of on my own. For example, rather than marking a class as serializable, storing its primitive datatype members in separate session variables, and then recreating the object can improve performance.
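A small illustration of that tip, assuming an out-of-process session provider (the CartSummary type and the session key names are made up):

using System.Web.UI;

public partial class CheckoutPage : Page
{
    // Hypothetical object we want to keep across requests.
    private class CartSummary
    {
        public int ItemCount;
        public decimal Total;
    }

    private void SaveSummary(CartSummary summary)
    {
        // Store the primitives separately: each serializes cheaply out-of-process.
        Session["CartItemCount"] = summary.ItemCount;
        Session["CartTotal"] = summary.Total;
    }

    private CartSummary LoadSummary()
    {
        // Recreate the object instead of round-tripping a [Serializable] graph.
        return new CartSummary
        {
            ItemCount = (int)Session["CartItemCount"],
            Total = (decimal)Session["CartTotal"]
        };
    }
}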
Firstly, your colleague is implementing his own DB-backed session management system; I do not see what advantage this has over using the built-in session state stored in a database (MS SQL Server is the default; there is no reason not to use Oracle instead).
Is his solution better than the built-in one? Unlikely. It's way more work for you, for a start. Here's a simple illustration of why. Let's say you use cookies to store your ID: how do you cope with a user who turns off cookies? If you are using ASP.NET's session state there's no problem, as it can fall back to using the query string. With your colleague's idea you have to roll your own.
There is a very valid question as to whether you should have session state at all. If you can design your application not to need any session state, you will have a much easier time scaling and testing. Obviously you may have application state which needs to live beyond a session anyway (a simple case being user names and passwords), but you have to store those data anyway regardless of whether you have session state.
The MS implementation of Session State is not evil in and of itself... it is how some developers use it. As mentioned above, using the built-in session state provider means that you don't have to reinvent the security, aging, and concurrency issues. Just don't start jamming lots of garbage in the session because you're too lazy to figure out a better way to manage state and page transitions. Session doesn't scale really well... if each user on your site stuffs a bunch of objects in the session, and those objects take up a tiny bit of the finite memory available to your app, you'll run into problems sooner than later as your app grows in popularity. Use session in the manner for which it was designed: a token to represent that a user is still "using" your site. When you start to venture beyond that, either because of ignorance or laziness, you're bound to get burned.
You should be judicious in your use of Session, since multiple requests to the same Session object will usually be queued: see "Concurrent requests and session state" http://msdn.microsoft.com/en-us/library/ms178581.aspx.
Note that you can set EnableSessionState to ReadOnly to allow concurrent read access to session state.
This queuing is a good thing, as it means developers can use Session without being concerned about synchronization.
I would not agree with your colleague's recommendation to "never" use Session and I certainly wouldn't consider rolling my own.
First, a browser will only make two requests to a given hostname at a given time, and for the most part those requests are for static content (JS files, CSS, etc.). So the serialization of requests for dynamic content isn't nearly the issue one might think. Also, I think this may be getting confused with Classic ASP, where pages that use Session are definitely serialized; I don't believe that is the case with ASP.NET.
With ASP.Net session state (SQL mode, state server, or custom) you have an implementation that is standard, and consistent throughout an application. If you don't need to share session information this is your best bet. If you need to share information with other application environments (php, swing/java, classic asp, etc.) it may be worth considering.
Another consideration is that a lot of developer attention has gone into the performance and design of the built-in session implementation, compared to rolling your own, even with a different provider.
Are there any security concerns with this?
If you roll your own you'll have to handle Session Fixation and Hijacking attacks, whereas using the built-in Session I think they are handled for you (but I could be wrong).
The home-made session your colleague describes is doing nothing different from the "SQL" mode of .NET sessions, and in my experience sessions don't degrade your performance in any way. Building your own session manager will require putting in several other pieces of plumbing as well: security, flushing it out, etc.
The advantage of the built-in sessions is that they are easy to use, with all this plumbing already taken care of. With "SQL" mode you can persist the session data in a database, allowing you to run your app on web farms without any issues.
We designed a B2B e-commerce app for a Fortune 57 company which processes over 100k transactions a day, and we used sessions [SQL mode] quite extensively without any problems whatsoever.
Correct me if I am wrong:
The primary advantage of storing Session state in a db, e.g., SQL Server, is that you are not consuming memory resources, but instead storing it to disk in a db.
The disadvantage is that you take an IO hit to retrieve this info from the database each time you need it (or maybe SQL Server even does some magic caching of the data for you, based on recently executed queries?).
In any event, paying the price of an IO to retrieve the session info from a database on each trip to the web server seems like a safer strategy for sites that encounter a lot of traffic.
