I want to pass a session from one application to another, like what Gmail and Orkut do.
Does anyone have an idea of how to do this?
Is it possible without using any DB?
Assuming you want to control the entire pipeline: to accomplish this you need a centralized state server of some kind, which both sites can communicate with on the backend. For many smaller applications the database itself is used as the state server, but that is just one implementation of many. There are dedicated state server products, some free and some paid.
Even if both of your applications are on the same server, it's not possible to share session directly with out-of-the-box ASP.NET functionality, because in-memory session is in-process. However, running a dedicated state server on the same physical machine as both your applications, and pointing both applications at it, will get you nearly there: it will have the same memory footprint as if they truly shared session, and the performance is extremely good (interprocess communication is blisteringly fast compared to network I/O).
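For illustration, a minimal sketch of pointing an application at the out-of-process ASP.NET state service (the host and port are placeholders; note that out of the box, two distinct applications still get separate session partitions unless you also work around the application ID):

<system.web>
  <!-- Both applications would carry the same entry, aimed at one state service. -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=localhost:42424"
                timeout="20" />
</system.web>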
Here is a more detailed description of the mechanics of a common authentication scheme between two or more sites.
What language are you using? If you are using ASP.NET on IIS, which I'd assume from your tags, you can do it using the machineKey element under <system.web> in the web.config file. It would look similar to this:
<system.web>
<machineKey validationKey="(here)" decryptionKey="(here)" validation="SHA1" />
</system.web>
Then, you can use the enableCrossAppRedirects="true" attribute on <forms> authentication type if you'd like the authentication keys to be passed between apps.
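For illustration (the name and loginUrl values here are placeholders), the forms element would look something like this:

<authentication mode="Forms">
  <!-- enableCrossAppRedirects lets the authentication ticket travel between the apps. -->
  <forms name=".SHAREDAUTH" loginUrl="Login.aspx" enableCrossAppRedirects="true" />
</authentication>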
Also, if you'd like to generate a machineKey, you can use Scott Forsyth's tool at http://www.orcsweb.com/articles/aspnetmachinekey.aspx
Finally, as the first answer noted, you can implement the ASP.NET state server for better control of the application's state. The ASP.NET team will soon be releasing Velocity, which handles distributed caching, and there are also third-party tools for both.
I need to create a web application that uses WIF to communicate with ADFS in order to login users. This web application supports multi-tenancy, accordingly, the same code base will be used to serve requests to site1.mydomain.com and site2.mydomain.com.
Currently, my WIF configuration is in the web.config file, which is preventing me from achieving multi-tenancy. So I thought maybe there's a way to provide all the required WIF configuration through code, reading the host name from the request URL and retrieving the tenant's configuration from the database instead of the web.config file.
Is that even possible? Any ideas or thoughts?
You might get some ideas from this similar post:
how do i move federated configuration out of the web config
AFAIK, FederationConfigurationCreated is called only once per application. This means you will need to wire up things like a custom SecurityTokenHandler, cookie handler, certificate validator, etc. that do their work based on the current context. I would personally consider all of this doable, but it might take you one to two months to get all the sharp edges out of it. I mean, writing a SecurityTokenHandler is doable, but it is simpler when you have done it before. You will need to dive really deep into WIF, and you should consider whether that is what you want.
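To make that "wiring" concrete, here is a minimal sketch (assuming .NET 4.5's System.IdentityModel.Services; TenantAwareSaml2TokenHandler is a hypothetical handler that would resolve the tenant per request, e.g. from the host name):

using System.IdentityModel.Services;
using System.Web;

// Sketch only, not working multi-tenant code.
public class Global : HttpApplication
{
    protected void Application_Start()
    {
        // Fired once per application, so anything tenant-specific must be
        // resolved later, per request, inside the components wired up here.
        FederatedAuthentication.FederationConfigurationCreated += (sender, e) =>
        {
            var identityConfig = e.FederationConfiguration.IdentityConfiguration;

            // Hypothetical: swap in a token handler that fetches the tenant's
            // settings (audience URI, certificates, ...) on every validation.
            identityConfig.SecurityTokenHandlers.AddOrReplace(
                new TenantAwareSaml2TokenHandler());
        };
    }
}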
As an alternative (that you probably don't want), you might consider a deployment per tenant. Depending on the number and volatility of tenants, this might or might not be a good idea.
I have been asked the following question a couple of times and feel I could have had a better answer for it, so I'm relaying it here in hopes of finding more resources, blogs, books, or pointers on writing scalable MVC3 C# code. If you have any pointers on writing better-performing code that is hosted across multiple servers, I would greatly appreciate it. For the sake of argument, let's say it is for code that expects upwards of 10-20K hits a day.
Question:
What coding considerations do you take into account for writing scalable code that is distributed over several servers?
My gut tells me the answer lies in handling session. My background over the last few years has been in writing services and forms applications rather than web applications, so I'm looking for information that can help me with web-application-specific development, particularly for C# MVC3. Any blogs or books you suggest I'll definitely look into!
One of the rules for implementing scalable web applications is for them to be stateless. Session is the first thing that should be thrown out of the equation, as it is exactly what makes an application stateful. If you have a completely stateless application, you can throw hardware at it when traffic increases and the application will be able to handle it. So start by putting the following line in your web.config:
<system.web>
<sessionState mode="Off" />
...
</system.web>
The problem will now lie in the data tier, as this is where the state goes. So the way to improve performance and limit the number of requests to this node is to use caching. Cache as much data as you can, and preferably store this cache on machines separate from the web servers: dedicated caching machines.
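As a rough sketch of that cache-aside idea (MemoryCache here is just a local stand-in for a dedicated cache tier such as Velocity or memcached; CacheAside and loadFromDb are illustrative names):

using System;
using System.Runtime.Caching;

public static class CacheAside
{
    private static readonly ObjectCache Cache = MemoryCache.Default;

    // Try the cache first; only hit the data tier on a miss.
    public static T Get<T>(string key, Func<T> loadFromDb, TimeSpan ttl)
    {
        object cached = Cache.Get(key);
        if (cached != null)
            return (T)cached;

        T value = loadFromDb(); // the expensive data-tier call
        if (value != null)      // MemoryCache cannot store null values
            Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
        return value;
    }
}

Usage would then look something like CacheAside.Get("products:all", LoadProducts, TimeSpan.FromMinutes(5)), where LoadProducts is whatever your data access layer exposes.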
I came across a case study a few days ago, related to web application architecture.
Here is the scenario:
There is a single web service used by, say, 1000 web applications. This web service is hosted on a particular server. If the web service's hosting location changes, how do the other applications come to know about this change?
Keeping it in web.config doesn't seem to be a feasible solution, as we would need to modify the web.config files of all the applications.
Keeping these settings in a common repository, and letting all the applications use it to find the web service address, came to mind, but then there is again the question of where to store this common repository.
I am just curious to know how this could be achieved with good performance.
Thanks in advance for any kind of suggestions.
Do you have full access to, or control over, all of the web applications consuming that web service? If so, you could have a script or some custom code that updates all their web.config files at once. It seems like a lot of work, but this way you have more control, and you could eventually point only some applications at the new URL while leaving others on another URL.
The idea of a setting in a centralized database gives you faster update propagation, which could also be bad in case of errors; then you have all applications referring to the same place, with no way to split them. You would also have to connect to that centralized database from all of them, probably by adding a key with its connection string to each web.config, and if that database is unreachable or down, the web applications will not be able to consume the web service simply because they cannot get its URL.
I would go for the web.config; eventually you could have a settings helper class that abstracts the retrieval of that URL, so the UI or front end does not know where the URL comes from.
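Such a helper could be as small as this sketch (ServiceSettings and the OrdersServiceUrl key are hypothetical names):

using System.Configuration;

// Callers never read the config key directly, so the lookup strategy
// (web.config today, a database tomorrow) can change in one place.
public static class ServiceSettings
{
    public static string OrdersServiceUrl
    {
        get { return ConfigurationManager.AppSettings["OrdersServiceUrl"]; }
    }
}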
Anyway, do you plan to change the URL of a web service often? Wouldn't it be better to copy it to a new URL but also keep it available at the current URL for a while?
Another advantage of the web.config approach is that every time you update and save it the application is restarted, while a change in a database might take a while to be detected if you have some caching mechanism in place.
Hope this helps.
Davide.
I just discovered that ASP.NET uses its own profile system to register users, and there seem to be a lot of features available as a bonus with it (such as secure authentication). However, it seems rather specific to have such a feature in a general-purpose development environment, and things that work in the background the way the profile system does, without me really knowing how (like where the user data is stored), kind of scare me.
Is it worth developing a website which requires user authentication using the ASP.NET profile system, or would it be better to develop my own using SQL databases and such? I'm not going to avoid using SQL anyway; even if I use profiles, I'll use the profile's unique ID to identify user data in the SQL tables, so in that sense I'm not going to avoid using SQL for user information at all.
My favorite thing about profiles is that you can create custom permissions in Web.config files using them, and avoid having to type the same code at the top of all your aspx source files to do the authentication check.
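For illustration, such a Web.config rule might look like this (the "Admin" path and "Administrators" role are placeholder names):

<location path="Admin">
  <system.web>
    <authorization>
      <!-- Only members of the role may enter; everyone else is denied. -->
      <allow roles="Administrators" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>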
The other thing I kind of like about it is that security is built in with secure authentication cookies, so I wouldn't have to deal with them myself.
But it doesn't seem like that big of a deal really. I'm just confused as to where profiles stand as far as ASP.Net development goes and what they're designed to accomplish.
The Profile/Membership and Role provider APIs are very intertwined, and specify things very narrowly. The benefit is that there is little you have to do to get a lot of functionality working. The disadvantage is when what you need doesn't match what is provided. Nevertheless, there are many potential gotchas that the API takes care of for you, so it really does make sense to use it, at least for authentication.
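To give a feel for how little code that "lot of functionality" needs, here is a sketch of a hypothetical login page's code-behind (LoginPage, LoginButton, UserName, and Password are placeholder names):

using System;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical code-behind; UserName and Password are TextBoxes on the page.
public partial class LoginPage : Page
{
    protected TextBox UserName, Password;

    protected void LoginButton_Click(object sender, EventArgs e)
    {
        // The membership provider does the hashing, lookup, and lockout
        // logic behind this single call.
        if (Membership.ValidateUser(UserName.Text, Password.Text))
        {
            // Issues the secure forms-authentication cookie and redirects
            // back to the originally requested page.
            FormsAuthentication.RedirectFromLoginPage(UserName.Text, false);
        }
    }
}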
My needs did not match what the API provided, and I really only needed the Membership portion. The problem was that I had a piece where I needed to use the same authentication and authorization across a web application and a desktop application. My needs are pretty unique; it's designed for a classroom setting.
Getting membership to work for my needs wasn't that difficult; I just had to implement the Membership API. There are several features of the Membership API I just didn't need, like self-registration, etc. Of course, this did present me with a challenge for role management. Typically, as long as your user object implements IPrincipal it can be used directly, but there are serialization issues with the development web server Visual Studio ships with if your user class is not defined in the same assembly. Those problems deal with serialization, and your choices are to put the object in the GAC, or to handle cross-appdomain serialization yourself with objects that are in the GAC, like GenericPrincipal and GenericIdentity. The latter option is what I had to do.
The bottom line is that if you don't mind letting the API do all the management for you, then it will work just fine. It is a smart bit of engineering work, and it attempts to force you down a route with decent security practices. I've worked with a number of different authentication/authorization APIs (most were not CLR-based), and this API does feel a bit constraining. However, if you want to avoid pitfalls with session/state/cache management, you really should use the API and plug in your own providers as necessary.
With your database, if you need to link a user to any database element, you'll be storing the user's login ID (Context.User.Identity.Name).
You seem to be mixing up the Profile, Membership, and Role provider APIs. But to answer your question: why not use them? I would, unless there is a real constraint that makes them unusable...
I have an ASP.NET WebForms application on a production server, and it is really slow. So I decided to get some performance tips from my fellow SO users.
I've applied these to increase my ASP.NET website's performance:
Set debug=false
Turn off Tracing
Image caching
<caching>
<profiles>
<add extension=".png" policy="CacheUntilChange" kernelCachePolicy="CacheUntilChange" location="Any" />
<add extension=".jpg" policy="CacheUntilChange" kernelCachePolicy="CacheUntilChange" location="Any" />
<add extension=".gif" policy="CacheUntilChange" kernelCachePolicy="CacheUntilChange" location="Any" />
</profiles>
</caching>
Do you know of any other real performance boosters? Any suggestions...
A webpage can be fast only by design.
A simple option cannot make your page load faster by itself. Setting debug=false only eliminates the extra debugging functions; if you are not actually using them, it does not change much.
I agree with everything Paul says (you can find it here in more detail), and I have to add the following as extras...
You need to follow some guidelines and do a lot of work to make a page really fast.
What I follow:
I use a (custom) cache for my database actions, which really boosts data-loading speed, but it also makes the code much bigger, and I have spent a lot of time on it.
I have used a profiler to find the slow points on the page and correct them.
I use the Inspector in the Google Chrome browser to locate slow-loading and double-loading problems.
I have eliminated the double use/creation of any variables in custom controls.
I use caching in the client browser, based on these suggestions (see the sketch after this list).
I use a web farm and/or a web garden (more than one pool).
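As a small illustration of the client-browser caching point above (a sketch; the handler name and the one-year lifetime are arbitrary choices):

using System;
using System.Web;

// Sketch: an IHttpHandler that marks its response as cacheable by the
// browser for one year, so repeat visits skip the round trip entirely.
public class CachedContentHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        HttpCachePolicy cache = context.Response.Cache;
        cache.SetCacheability(HttpCacheability.Public);
        cache.SetExpires(DateTime.UtcNow.AddYears(1));
        cache.SetMaxAge(TimeSpan.FromDays(365));
        context.Response.Write("content that rarely changes");
    }
}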
How to see your page speed: http://code.google.com/speed/page-speed/docs/using.html
Optimize cache: http://code.google.com/speed/page-speed/docs/caching.html
Many general topics from Google can be found here: http://code.google.com/speed/articles/
About caching: http://www.mnot.net/cache_docs/
Hope this helps.
Not directly ASP.NET, but
1) Make sure compression is enabled within IIS (a config sketch follows below).
2) If your site is cookie-"heavy", host static files (images, CSS and JS) on a separate domain. Each request back to the server needs to send all the site's cookie information with it. So if your cookie usage is 10 KB+, then 20 static-file references within a page will result in an extra 200 KB being sent back to the server in total. If you move the static files to a domain that has no cookie requirements, you remove this overhead. It is worth noting that, due to a "fault" in how IE processes things, you don't get any benefit from using subdomains: IE appears to insist on sending all domain cookies to subdomains. An additional benefit of this is allowing more HTTP requests in parallel.
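For point 1, a sketch of enabling compression via web.config on IIS7+ (these settings can also be made at the server level, and depend on the compression modules being installed):

<system.webServer>
  <!-- Compress both static files and dynamic ASP.NET responses. -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>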
Stop hacking your production server (that is likely to introduce functional bugs) and take a step back. Can you reproduce the performance problems in a non-production environment? If not, then do the work needed so that you can.
You should try to reproduce the problem as follows:
Get production-grade hardware in your test environment (web and database servers, etc.) matching the hardware used in production.
Run the same software set as production, including the same configuration of ASP.NET and all other services used.
Load production-sized data (production data if possible) into your databases. Remember to firewall your lab from the internet so that it cannot send mail or other traffic out, or your users might start receiving email notifications from the test system, which would be bad!
Generate simulated traffic to the site at production levels. This is potentially quite tricky, but there are a lot of tools available.
Now you've got a fighting chance of reproducing the problem in testing, and you can try out solutions.
Usually database-driven web sites are bottlenecked by the database, so I'd start there. The main tricks are:
Do fewer queries
Optimise the queries you do run (fetch less data, use appropriate indexes, etc.)
Change the database structure so that the queries you have to run are easier for it (clustered indexes, etc., and possibly denormalise)
But whatever change you make, try it on your test system, measure the results, and if it doesn't help, ROLL IT BACK.
In general, configuration changes are likely to make only minor differences, but you can try those too.
If all this sounds like too much effort, try throwing hardware at the problem; developer time is much more expensive than hardware. In the time it takes you to do the above (which could be months, depending on the complexity of the app), you could have bought some meaty production boxes. But be sure that it's going to help.
Does your database fit in RAM? Could it possibly fit in RAM? If the answers to those questions are no and yes respectively, buy more RAM for the database server. This is one of the cheapest ways of making your DB go faster without code changes.