I've got an application that I'm trying to convert from a uni-tenant solution (single client, separate database, separate website) to a partially multi-tenant one (single client, separate database, SHARED website).
I'm trying to figure out how to get my SimpleMembership setup to work in a situation where I can't have WebSecurity.InitializeDatabaseConnection tied to Application_Start. Instead, I want each session to set up its own 'web security context', independent of the application. The reason I need this behavior is that my user/membership data is held in the uni-tenant (separate database) portion of the system, and, if initialized at Application_Start, the membership system only allows access to the FIRST tenant to visit the app. If I could instead make the WebSecurity system work within the session (or request) scope, this issue would be resolved.
I've been unable to find any documentation that says what I'm trying to do CANNOT be done, but the indication is there all the same (in the form of many example posts saying that WebSecurity.InitializeDatabaseConnection should happen in Application_Start).
Short of completing the FULL transition to a multi-tenant database (which is on the schedule, but can't be completed in the allotted time), what can I do to work around this 'limitation'?
(My best idea so far is that all client websites on a single website node (or, better yet, across ALL nodes) would have to share a special login database.)
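Something like this is what I have in mind — a minimal sketch only, where "SharedLoginDb" and the UserProfile table/column names are my placeholders, not the real ones:

```csharp
using System;
using WebMatrix.WebData;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        if (!WebSecurity.Initialized)
        {
            // args: connection string name, user table, id column,
            // user name column, auto-create tables
            WebSecurity.InitializeDatabaseConnection(
                "SharedLoginDb", "UserProfile", "UserId", "UserName", true);
        }
    }
}
```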
Edit
2015-04-17
I found this post (ASP.NET - SimpleMembershipProvider initialized incorrectly, how do I reinitialize it?) which, though not definitive, suggests that once the WebSecurity database is 'wired up', there is no facility for re-initializing it later on.
Related
I am working on an ASP.NET WebForms application with Entity Framework. For some reports it also uses a DLL, and inside that DLL we have explicit queries (plain ADO.NET) to get the records from SQL Server.
The problem is that when I change a column such as ParentID in SQL Server, I have to reset the website in IIS before I see the change. This dependency is not logical, and I want to know why it happens. Is it related to caching because of the method calls in the DLL?
How can I solve this problem?
When you run a query against SQL server (or any database, really), the result that you see is not the data "in the database", so to speak. The query returns a copy of that data that belongs only to you. The copy of the data gets sent over the network, to the client - in your case, an ASP.NET web application - and the application does whatever it needs to do, such as show it to a user.
Once the query which retrieved the data is complete, there is no longer any link between the data in the client, and the data in the database. There is no continuous, "live" connection between the two, even if your actual database connection is still open. The database connection is merely a way to send queries to the server, and for it to send copies of the data back.
It's like taking a copy of a file from a different machine. If you copy a file from my machine, and then I update my copy, your copy doesn't instantly get updated.
If you want data in some user interface to stay perfectly up to date with the data that actually exists in the database, you have a difficult problem to solve. There is no "easy" way to do this. Or perhaps more accurately, there is no simple or efficient way to do this.
This might seem odd to you. You're thinking "well, why not? Why doesn't it just show me the values as they actually exist?". The reason is that these systems need to be able to support many users - often thousands at once - who are all both reading the database and writing to it. Imagine someone was in the middle of updating data in the database, but then they roll back their transaction. Should you see the data as it was being modified, but not committed? What if two users are trying to update "the same" data at once? All sorts of concurrency questions come into play, which basically boil down to questions about locking.
What you are encountering here is a basic principle of multi-threaded environments, which carries over to systems with multiple clients: mutable data can't safely be accessed directly by multiple parties at the same time. Instead, you give each party its own immutable copy.
In a web application things are even more disconnected. When the browser requests the web page, the server side of the web application gets a copy of the data from the database, and then transmits that to the browser. Once the page is loaded there is no longer any link between the web server and the database server, or any link between the web server and the web browser at the client, and certainly no link between the web browser and the database.
Ultimately, this is one of the "hard problems" in computer science. You want to know how to tell the client to invalidate its "cache", and refresh its local data. There are a few mechanisms provided by .NET to do this with SQL Server, but they are quite technical. One of them is query notifications.
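For illustration, here's roughly what the query notifications route looks like from the client side. This is a sketch only: it assumes Service Broker is enabled on the database, and the table and column names are placeholders:

```csharp
using System;
using System.Data.SqlClient;

public static class CacheWatcher
{
    public static void Watch(string connectionString)
    {
        // Opens the notification listener once per app domain.
        SqlDependency.Start(connectionString);

        // Notifiable queries need an explicit column list and two-part
        // table names — "SELECT *" will not register.
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT RecordId, ParentID FROM dbo.Records", conn))
        {
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once when the data changes: drop the cached copy,
                // re-run the query, and re-subscribe here.
            };

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* populate the cache */ }
            }
        }
    }
}
```

Be warned that the restrictions on notifiable queries (explicit column lists and two-part table names, among others) make this fiddly in practice.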
For client security and privacy reasons, we want to deploy a unique database for each client while using the same website.
I envision that during the Session_Start event, we would determine which database to use for them (by looking at the subdomain they come in on) and store the connection string in a session variable. Then in every Page_Init, we'd dynamically set each data object's connection string, and in code-behind we'd do the same.
Is there a better approach, and will setting the connection string in Page_Init work? Is using a session variable wise? I've tended never to use them except when no other solution was possible.
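To make it concrete, here's roughly what I'm picturing. GetConnectionStringFor is a hypothetical lookup from subdomain to that tenant's connection string, and myDataSource stands in for whatever data-bound object a page uses:

```csharp
// Global.asax.cs — sketch only.
void Session_Start(object sender, EventArgs e)
{
    // e.g. "client1" out of "client1.example.com"
    var subdomain = Request.Url.Host.Split('.')[0];
    Session["ConnectionString"] = GetConnectionStringFor(subdomain);
}

// Then on each page:
protected void Page_Init(object sender, EventArgs e)
{
    myDataSource.ConnectionString = (string)Session["ConnectionString"];
}
```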
The problem is that the model itself is really complex and can leave you with errors, especially when we are talking about changes to the database. Imagine that you need to add an extra field to the interface: if you have 100 clients, this means updating 100 different databases. When we talk about dealing with downtime, things get even worse.
I would approach this slightly differently: abstract your database layer and create one API that calls the database. From the website, you always call the API, passing the domain you want the data to come from.
You might ask what advantage this gives you. The biggest one shows up during upgrades and maintenance: one API per client is a lot easier to reason about than one database per client. And if you really want to have just one API (I would recommend one per client, deployed automatically), you can have a switch in the call: based on a parameter you pass to the API (it can be in a header, like the subdomain), you choose which database to connect to (a rough sketch follows the scenario below).
Let me give you a sample scenario and how I would suggest approaching it (this holds whether it's a database or an API):
Say I want to include a new data field. The first thing is to add this field on the backend (API or database) and deploy it. If it is an API, you can even test it by calling the API and seeing that the new field is now returned; that is not a problem for your UI, because it is just a field the UI does not use yet. After that, you change the UI to actually use the field and deploy that to production.
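To show what I mean by the switch, here is a rough sketch. The "X-Tenant" header name and the per-tenant connection string entries are my assumptions:

```csharp
using System.Configuration;
using System.Web;

// Resolve the tenant database from a request header. Assumes one named
// connection string per tenant in web.config, keyed by subdomain.
public static class TenantResolver
{
    public static string ResolveConnectionString(HttpRequest request)
    {
        // the caller (your website) sends the tenant's subdomain along,
        // e.g. in a custom "X-Tenant" header
        var tenant = request.Headers["X-Tenant"];
        var entry = ConfigurationManager.ConnectionStrings[tenant];
        if (entry == null)
            throw new HttpException(400, "Unknown tenant: " + tenant);
        return entry.ConnectionString;
    }
}
```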
I am no master, but I have been using Ruby on Rails for quite a few years now and consider myself well-versed in it. Additionally, I have been working as a web developer for the last 10 years, starting with .NET.
In .NET we used to manually create a database connection before firing any query or making a transaction. Rails, on the other hand, while spawning a new thread for the request, fires off a bag of initialization steps, which includes setting up a database connection.
Now we are working on a project where we may not need a DB connection for every action. Is it somehow possible to override the default DB connection behavior and do it action-wise (with a before_filter, maybe)?
PS: Another way I thought of is creating an additional Sinatra web application, which houses all such actions, and using that instead to do the work or get the data.
Ehm, where did you read that Rails sets up a database connection for every request? My understanding is that a connection is checked out from the connection pool only when needed.
Also I'm surprised this is a big issue! If you don't need to hit the database (which implies no authentication, right?) then you should be caching the entire response, server-side and client-side.
Check out the guide on caching: http://guides.rubyonrails.org/caching_with_rails.html and Dalli https://github.com/mperham/dalli
Separating the client app from the data layer (so Rails on top of an API) is a nice architecture I've used for a project with success. I'd suggest Grape instead of Sinatra however.
I'm using Spring MVC 3 + Tiles for a webapp. I have a slow operation, and I'd like a please wait page.
There are two main approaches to "please wait" pages that I know of:
1. Long-lived requests: render and flush the "please wait" bit of the page, but don't complete the request until the action has finished, at which point you can stream out the rest of the response with some JavaScript to redirect away or update the page.
2. Return immediately, and start processing on a background thread. The client polls the server (in JavaScript, or via page refreshes), and redirects away when the background thread finishes.
(1) is nice as it keeps the action all single-threaded, but doesn't seem possible with Tiles, as each JSP must complete rendering in full before the page is assembled and returned to the client.
So I've started implementing (2). In my implementation, the first request starts the operation on a background thread, using Spring's @Async annotation, which returns a Future<Result>. It then returns a "please wait" page to the user, which refreshes every few seconds.
When the please wait page is refreshed, the controller needs to check on the progress of the background thread. What is the best way of doing this?
If I put the Future object in the Session directly, then the poll request threads can pull it out and check on the thread's progress. However, doesn't this mean my Sessions are not serializable, so my app can't be deployed with more than one web server (without requiring sticky sessions)?
I could put some kind of status flag in the Session, and have the background thread update the Session when it is finished. I'm very concerned that passing an HttpSession object to a non-request thread will result in hard to debug errors. Is this allowed? Can anyone cite any documentation either way? It works fine when the sessions are in-memory, of course, but what if the sessions are stored in a database? What if I have more than one web server?
I could put some kind of status flag in my database, keyed on the session id, or some other aspect of the slow operation. It seems weird to have session data in my domain database, and not in the session, but at least I know the database is thread-safe.
Is there another option I have missed?
The Spring MVC part of your question is rather easy, since the problem has nothing to do with Spring MVC. See a possible solution in this answer: https://stackoverflow.com/a/4427922/734687
As you can see in the code, the author is using a tokenService to store the future. The implementation is not included, and here the problems begin, as you are already aware, when you want failover.
It is not possible to serialize the future and let it jump to a second server instance. The thread is executing within a particular instance and therefore has to stay there. So session storage is not an option.
As in the example link you could use a token service. This is normally just a HashMap where you can store your object and access it later again via the token (the String identifier). But again, this works only within the same web application, when the tokenService is a singleton.
The solution is not to save the future, but instead the state of the work (in progress, finished, failed with a result). Even when the querying session and the executing threads are on different machines, the state should be accessible and serializable. But how would you do that? You could store it in a database or on the file system (in the example above, you could check whether the zip file is available), in a key/value store, in a cache, or in a common object store (Terracotta), ...
In fact, every batch framework (Spring Batch, for example) works this way: it stores the current state of the jobs in the database. You are concerned about mixing domain data with operational data, but most applications do. In large applications there is the possibility of using two database instances, one for operational data and one for domain data.
So I recommend that you save the state and the result of the work in a database.
Hope that helps.
Say, for example, you are caching data within your ASP.NET web app that isn't often updated. You have another process running outside of the app which occasionally updates this data; when it does, you would like the cached data to be cleared immediately so that the next request picks up the new data straight away.
The caching service is running in the context of your web app and not externally - what is a good method of calling into the web app to get it to update the cache?
You could, of course, just hack together a page or web service called ClearTheCache that does it. This can then be called by your other process. Of course you don't want this page to be externally usable or visible on your web app, so perhaps you could check that incoming requests to it are coming from localhost, and if not, throw a 404. Is this acceptable? Could it be spoofed at all (for instance, if you used HttpApplication.Request.Url.Host)?
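Something along these lines, say (the handler name and cache key are just placeholders):

```csharp
using System.Web;

// ClearTheCache.ashx.cs — the quick-and-dirty version described above.
public class ClearTheCache : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Request.IsLocal tests the actual connection, rather than the
        // spoofable Host header that Request.Url.Host reflects.
        if (!context.Request.IsLocal)
        {
            context.Response.StatusCode = 404;
            return;
        }
        context.Cache.Remove("rarely-updated-data"); // placeholder key
    }

    public bool IsReusable { get { return true; } }
}
```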
I can think of many different ways to go about this, mainly revolving around creating a page or web service and limiting requests to it somehow, but I'm not sure any are particularly elegant. Neither do I like the idea of the web app routinely polling out to another service to check if it needs to execute something, I'd really like a PUSH solution.
Note: The caching scenario is just an example, I could use out-of-process caching here if needed. The question is really concentrating on invoking code, for any given reason, within a web app externally but in a controlled context.
Don't worry about limiting to localhost; you may want to push from a different server in future. Instead, share a key (asymmetric or symmetric, it doesn't really matter) between the two, have the PUSH service encrypt a block of data (control data, for example) and have the receiver decrypt it. If the block decrypts correctly and the data is readable, you can safely assume that only the service that was supposed to call you did, and you can perform the required actions! Not the neatest solution, but it allows you to scale beyond a single server.
EDIT
Having said that, an asymmetric key would be better: have the PUSH service hold the private part and the website the public part.
EDIT 2
Have the PUSH service put the date/time at which it generated the ciphertext into the data block; then the client can be sure that a replay attack hasn't taken place by checking that the date/time is within an acceptable window (say, a minute).
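To make that concrete, here is a sketch of the receiving side. I've used an HMAC signature rather than full encryption (a signature is enough to prove who sent the block); the shared key, the payload layout and the one-minute window are all assumptions:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class PushAuthenticator
{
    // Verify that the payload was produced by the holder of the shared
    // key and is fresh. For simplicity the payload here is just the
    // sender's UTC timestamp, ISO-8601 formatted; real control data
    // would ride alongside it.
    public static bool IsAuthentic(byte[] sharedKey, string payload,
                                   byte[] signature)
    {
        using (var hmac = new HMACSHA256(sharedKey))
        {
            var computed = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
            // (a constant-time comparison would be better in production)
            if (!computed.SequenceEqual(signature))
                return false;   // wrong key or tampered payload

            var sentAt = DateTime.Parse(payload, null,
                System.Globalization.DateTimeStyles.RoundtripKind);
            // reject replays outside the acceptable window
            return (DateTime.UtcNow - sentAt.ToUniversalTime()).Duration()
                   < TimeSpan.FromMinutes(1);
        }
    }
}
```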
Consider an external caching mechanism like the Enterprise Library caching block, which would be available to both the web app and the service, or a file to cache the data in.
HTH.