In a new project, I'm planning to use ActiveDirectoryMembershipProvider and SqlRoleProvider to provide authentication and authorization, respectively.
One thing that isn't clear to me is how maintenance is handled -- when users that have logged in and been assigned roles are removed from Active Directory, how do you remove orphaned records in the mapping table used by SqlRoleProvider? I believe this is the aspnet_UsersInRoles table.
One could query Active Directory periodically for disabled users, then iterate through that list, calling Roles.RemoveUserFromRoles(UserId, Roles.GetRolesForUser(UserId)) for each UserId that is also in aspnet_UsersInRoles. Hugely slow, I would imagine, for a large organization.
Or, alternatively, for each distinct UserId in aspnet_UsersInRoles, query Active Directory and ensure the userAccountControl attribute's bitmask doesn't indicate the account is disabled. Also very inefficient for a large number of application users.
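For illustration, the first approach might look roughly like this (just a sketch: the LDAP path is a placeholder, it assumes role usernames are stored as sAMAccountName, and the 1.2.840.113556.1.4.803 matching rule does the disabled-bit test server-side):

    // Sketch only: periodically remove role mappings for disabled AD accounts.
    // Assumes usernames in aspnet_UsersInRoles match sAMAccountName; adjust to
    // whatever attributeMapUsername your membership provider uses.
    using System.DirectoryServices;
    using System.Web.Security;

    public static class RoleCleanup
    {
        public static void PurgeDisabledUsers()
        {
            // LDAP bitwise filter: userAccountControl flag 0x2 = ACCOUNTDISABLE
            const string filter =
                "(&(objectCategory=person)(objectClass=user)" +
                "(userAccountControl:1.2.840.113556.1.4.803:=2))";

            using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
            using (var searcher = new DirectorySearcher(
                root, filter, new[] { "sAMAccountName" }))
            {
                searcher.PageSize = 1000; // page results for large directories

                foreach (SearchResult result in searcher.FindAll())
                {
                    var userName = (string)result.Properties["sAMAccountName"][0];

                    // Only touch users that actually have role mappings.
                    string[] roles = Roles.GetRolesForUser(userName);
                    if (roles.Length > 0)
                        Roles.RemoveUserFromRoles(userName, roles);
                }
            }
        }
    }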
An uglier but much more efficient approach would be to store the last login date and periodically purge role associations for users who haven't logged in for, say, six months. This might cause headaches, though, for legitimate users who are simply inactive.
I'd love to hear suggestions.
Yes, you have to do the cleanup manually. Do you need instantaneous updates? If you can run a batch process nightly, that would be efficient, since it wouldn't run during core operational hours. Or, it might make sense to kick off a process in another thread to handle the deletion of the role as soon as you become aware of it. Removing roles on each user access spreads the hit across users and makes the application feel slow to them.
How often are roles removed? If frequently, consider a batch process; if only once every few years, then it probably isn't as much of an issue to work the cleanup into the application during some existing process.
As for how: you can use the API, but the aspnet_UsersInRoles and aspnet_Roles tables can also easily be cleaned up on their own via a SQL script.
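For example, a direct cleanup for a single user might look something like this (a sketch; the connection string is a placeholder and it assumes the standard aspnet_* schema, so back up before running anything like it):

    // Sketch of the direct-SQL route (bypasses the Roles API entirely).
    using System;
    using System.Data.SqlClient;

    public static class SqlRoleCleanup
    {
        private const string ConnStr = "<aspnetdb connection string>";

        public static void RemoveAllRolesFor(Guid userId)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "DELETE FROM dbo.aspnet_UsersInRoles WHERE UserId = @UserId", conn))
            {
                cmd.Parameters.AddWithValue("@UserId", userId);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }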
HTH.
Is there a way to protect the database from deletion? I mean, it's very easy to click the "x" next to the root node. That would destroy the whole app and cause an enormous mess to deal with.
How to deal with this fragility?
EDIT:
Let's assume I have two Firebase accounts: one for testing and one for the launched app. I regularly log in and out to switch between them. On the test account I delete whole nodes on a regular basis. Password protection for deletions would prevent a very expensive mix-up between the two accounts.
If you give a user edit access to the Firebase Console of your project, the user is assumed to be an administrator of the database. This means they can perform any write operation to the database they want and are not tied to your security rules.
As a developer you probably often use this fact to make changes to your data structure while developing the app. For application administrators, you should probably create a custom administrative dashboard, where they can only perform the actions that your code allows.
There is no way to remove specific permissions, such as limiting the amount of data they can remove. It could be a useful feature request, so I suggest posting it here. But at the moment: if you don't trust users to be careful enough with your data, you should not give them access to the console.
As Travis said: setting up backups may be a good way to counter some of this anxiety.
I have an ASP.NET MVC application that suffers a horrible affliction. In one of the POST methods the user is able to submit an update. This update takes maybe 10 seconds to compute, and impatient users sometimes click more than once. I believe this is causing a database update race condition, and I don't know what to do. Where should I save the "isUpdating" variable in order to block such repeat requests? It can't be a web role instance, since those are independent and my user may end up on one or the other. Nor can it be the database, because of the race condition. I'm sure there must be a standard way. I could, for example, see a scenario where I restrict users to specific web roles. Is that possible, or is there a better way?
In this case it would probably be better to write the information from the user to a queue, then return the page to the user straight away.
Then have a worker role that picks the information out of the queue and updates the database.
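Something along these lines, assuming the classic Azure Storage SDK (Microsoft.WindowsAzure.Storage) and Json.NET; UpdateModel, the queue name, and the connection string are placeholders:

    // Sketch of the enqueue-then-return pattern.
    using System.Web.Mvc;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Queue;
    using Newtonsoft.Json;

    public class UpdateModel { public int Id; public string Payload; }

    public class UpdateController : Controller
    {
        private const string ConnStr = "<storage connection string>";

        [HttpPost]
        public ActionResult SubmitUpdate(UpdateModel model)
        {
            var queue = CloudStorageAccount.Parse(ConnStr)
                .CreateCloudQueueClient()
                .GetQueueReference("updates");
            queue.CreateIfNotExists();

            // Hand the 10-second job to the worker and return immediately.
            // Duplicate clicks just enqueue duplicate messages, which the
            // worker can detect and skip, instead of racing in the database.
            queue.AddMessage(new CloudQueueMessage(JsonConvert.SerializeObject(model)));
            return RedirectToAction("UpdatePending");
        }
    }

    // Inside the worker role's Run() loop, process one message at a time:
    //
    //   var msg = queue.GetMessage();
    //   if (msg != null)
    //   {
    //       var model = JsonConvert.DeserializeObject<UpdateModel>(msg.AsString);
    //       ApplyUpdate(model);       // the slow database work, now serialized
    //       queue.DeleteMessage(msg); // delete only after success
    //   }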
I am working on a web app in ASP.NET/C# which needs to scale to handle a high user load (it will probably run in a web farm). It will cater to a large number of users, around 1 million plus, though the number of online users at any time would be around 30K-50K. I plan to use caching (provider-based), and was wondering:
Is it a good idea to cache ALL users for performance? I plan to cache other generic data, like settings, etc., but how efficient would it be to cache ALL users in memory? If a user changes his/her profile, I will reload only that particular user in the cache (which holds a collection of all the users). Any suggestions on this approach?
Do I need to worry about locking when using this users cache? The only one editing a profile would be the user himself, and that would be one atomic operation, though there will be multiple read operations in different threads. So when fetching users from the cache, or updating a particular user, should I use a lock?
Thanks
Asif
Putting anything in Global Cache that is only useful to a single user is usually a bad idea and a performance killer. Optimize your database queries, and you will be in much better shape.
As a general rule of thumb you should only keep things in cache that are expensive to get from the database, and more than one user will want to see that information at once. Such as a list of the top 100 products or something. Small amounts of data that are relatively cheap to grab from the database, and that are only useful to a single person should stay where they are.
Caching increases complexity tremendously, and even more so in a web farm. Don't introduce needless complexity unless you absolutely have to. Wait until you have an actual performance problem before trying to solve it.
Caching users is probably a good idea. But it depends on how much data you are going to cache for each user, and the cost of retrieving that data from wherever it is stored.
For locking - can anyone else edit a user's profile (like an administrator)? Would that be a common occurrence? If so, you may want to do some locking. Otherwise, if only the user can edit their own stuff, I wouldn't bother.
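If only the user edits their own profile, a lock-free structure is enough for the many-readers/one-writer pattern. A minimal sketch, assuming .NET 4's ConcurrentDictionary and a hypothetical UserProfile type:

    // ConcurrentDictionary handles many concurrent readers with an occasional
    // writer, and swapping the whole UserProfile reference on update is
    // atomic, so no explicit lock is needed.
    using System.Collections.Concurrent;

    public class UserProfile { /* treat as immutable once cached */ }

    public static class UserCache
    {
        private static readonly ConcurrentDictionary<string, UserProfile> Cache =
            new ConcurrentDictionary<string, UserProfile>();

        public static UserProfile Get(string userName)
        {
            // Loads from the database only on a cache miss.
            return Cache.GetOrAdd(userName, LoadFromDatabase);
        }

        public static void Refresh(string userName)
        {
            // Called after a profile edit: atomic replacement of the reference.
            Cache[userName] = LoadFromDatabase(userName);
        }

        private static UserProfile LoadFromDatabase(string userName)
        {
            return new UserProfile(); // hypothetical data-access call
        }
    }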
My company is building an ASP.NET HR application and we have decided to create one database per client. This ensures that clients cannot accidentally view another client's data, while also allowing for easy scalability (among other benefits, already discussed here).
My question is - what is the best way to handle security and data access in such a scenario? My intent is to use a common login/account database that will direct the user to the correct server/database. This common database would also contain the application features that each user/role has access to.
I was not planning to put any user information in each individual client database, but others on my team feel that the lack of security on each database is a huge hole (but they cannot articulate how duplicating the common access logic would be useful).
Am I missing something? Should we add an extra layer of security/authentication at the client database level?
Update:
One of the reasons my team felt dual user management was necessary is due to access control. All users have a default role (e.g. Admin, Minimal Access, Power User, etc.), but client admins will be able to refine permissions for users with access to their database. To me it still seems feasible for this to be in a central database, but my team doesn't agree. Thoughts?
We have a SaaS solution that uses the one DB per client model. We have a common "Security" database too. However, we store all user information in the individual client databases.
When the user logs into the system, they give us three pieces of information: username, password, and client-id. The client-id is used to look up their home database in the "security" database, and then the code connects to their home database to check their username/password. This way a client is totally self-contained within their own database. Of course, you need some piece of information beyond username to determine the home database. It could be our client-id approach, or it could be the requested domain name if you're using a sub-domain per client.
The advantage here is that you can move "client" databases around without having to keep them synced up with the security database. Plus you don't need to deal with cross-db joins when you're trying to look up user information.
Update: In response to your update... One of the advantages of each customer having their own DB is the ability to restore a single customer if they really need it. If you've split the customer's data into two databases, how do you restore it? Also, again, you'll need to worry about cross-db data access if the users are defined in a DB other than the home DB.
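For reference, the lookup flow might look roughly like this (a sketch; the table and column names and the HashPassword helper are illustrative, not our actual schema):

    // 1. Resolve the client's home DB from the common security DB.
    // 2. Check the credentials against the client's own database.
    using System.Data.SqlClient;

    public static class TenantLogin
    {
        private const string SecurityDbConnStr = "<security db connection string>";

        public static bool Authenticate(string clientId, string userName, string password)
        {
            string homeDbConnStr;

            using (var conn = new SqlConnection(SecurityDbConnStr))
            using (var cmd = new SqlCommand(
                "SELECT ConnectionString FROM Clients WHERE ClientId = @ClientId", conn))
            {
                cmd.Parameters.AddWithValue("@ClientId", clientId);
                conn.Open();
                homeDbConnStr = cmd.ExecuteScalar() as string;
            }
            if (homeDbConnStr == null) return false; // unknown client

            using (var conn = new SqlConnection(homeDbConnStr))
            using (var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Users WHERE UserName = @User AND PasswordHash = @Hash",
                conn))
            {
                cmd.Parameters.AddWithValue("@User", userName);
                cmd.Parameters.AddWithValue("@Hash", HashPassword(password)); // hypothetical
                conn.Open();
                return (int)cmd.ExecuteScalar() > 0;
            }
        }

        private static string HashPassword(string password)
        {
            return password; // placeholder; use a real password hash in practice
        }
    }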
I've always been of the opinion that security should be enforced at the application level, not the database level. With that said, I see no problem with your intended approach. Managing accounts and roles through a central database makes the application more maintainable in the long run.
You may want to look into using the ASP.NET membership provider to handle the authentication plumbing. That would work with your stated approach, and you can still keep all of the authentication data in a separate database. However, I agree with Chris that keeping one DB will ultimately be more maintainable.
I'm really asking this by proxy, another team at work has had a change request from our customer.
The problem is that our customer doesn't want a single user account to be logged in more than once at the same time; their employees are sharing logins and getting locked out.
Since this is on a web farm, what would be the best way to tackle this issue?
Wouldn't caching to the database cause performance issues?
You could look at using a distributed cache system like memcached
It would solve this problem pretty well (it's MUCH faster than a database), and it's also excellent for caching pretty much anything else.
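A hypothetical sketch with the Enyim.Caching memcached client (a third-party library); StoreMode.Add is atomic, so it succeeds only if no other session already holds the user's slot. Keep in mind memcached can evict entries, so treat this as advisory state:

    // Tracks active logins across the farm in memcached.
    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    public static class LoginTracker
    {
        // Reads server endpoints from the enyim.com/memcached config section.
        private static readonly MemcachedClient Client = new MemcachedClient();

        public static bool TryLogin(string userName, string sessionId)
        {
            // Atomic add: fails if another session already holds this user.
            return Client.Store(StoreMode.Add, "login:" + userName, sessionId);
        }

        public static void Logout(string userName)
        {
            Client.Remove("login:" + userName);
        }
    }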
It's just a cost of doing business.
Yes, caching to a database is slower than caching on your webserver. But you've got to store that state information in a centralized location, otherwise one webserver isn't going to know what users are logged into another.
Assumption: You're trying to prevent multiple concurrent log-ins by a single user.
A database operation at login and logout won't cause a performance problem.
If you are using a caching proxy, that will cause a problem:
a user will log out, but won't be able to log back in until the logout reaches the cache
Your biggest potential problem might be:
if the app/box crashes without a chance for the user to log out, the user's state in the database will remain "logged in".
It depends on how the authentication is done. If you already store the last successful login datetime (whatever the backend), maybe you can change the schema to also store a "logged_in" flag without incurring extra performance cost. (OK, it's not clean at all.)
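A variant that also sidesteps the stale "logged in" state mentioned above: store the most recent session ID per user instead of a boolean, so a crash leaves nothing to clean up; a fresh login simply overwrites it. A rough sketch, with table and column names as assumptions:

    // Only the most recent login is considered active; older sessions stop
    // passing the per-request check as soon as a new login stamps the row.
    using System.Data.SqlClient;

    public static class SingleSession
    {
        private const string ConnStr = "<connection string>";

        // At login: stamp this session as the only valid one for the user.
        public static void RegisterLogin(string userName, string sessionId)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "UPDATE Users SET CurrentSessionId = @Sid WHERE UserName = @User", conn))
            {
                cmd.Parameters.AddWithValue("@Sid", sessionId);
                cmd.Parameters.AddWithValue("@User", userName);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        // On each request: is this still the active session for the user?
        public static bool IsActiveSession(string userName, string sessionId)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "SELECT CurrentSessionId FROM Users WHERE UserName = @User", conn))
            {
                cmd.Parameters.AddWithValue("@User", userName);
                conn.Open();
                return (cmd.ExecuteScalar() as string) == sessionId;
            }
        }
    }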