I was discussing load balancing with a colleague at lunch. I admit that I know very little about this topic. We were discussing the various ways of maintaining session state in an ASP.NET application -- none of which suited the high-performance load balancing he was looking for.
"What about Silverlight?" I said. As far as I know it is stateless: you've got the app running in the browser, and you've got services on the server that feed/process data.
Does Silverlight totally negate the need for Session state management? Is it more friendly to load-balancing? Is it something in between?
I would say that Silverlight is likely to be a little more load-balancer friendly than ASP.NET. You have much more sophisticated mechanisms for maintaining state (such as isolated local storage), and pretty much, you only need to talk to the server when (a) you initially download the application, and then (b) when you make a web service call to retrieve or update data. It's analogous in this sense to an Ajax application written entirely in C#.
To put it another way, so long as either (a) your server-side persistence layer knows who your client is, or (b) you pass in all relevant data on each WCF call, it doesn't matter which web server instance the call goes to. You don't have to muck about with firewall-level persistence to make sure your HTTP call goes back to the right web server.
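As a rough illustration of option (b), here is what a session-free WCF contract might look like. The interface and parameter names (IProfileService, userId) are placeholders for illustration, not anything from a real project:

```csharp
using System.ServiceModel;

// Sketch of option (b): every call carries the caller's identity, so any web
// server instance behind the load balancer can answer it.
[ServiceContract(SessionMode = SessionMode.NotAllowed)]   // no server-side session
public interface IProfileService
{
    [OperationContract]
    string GetPreferences(string userId);                 // identity passed on every call

    [OperationContract]
    void SavePreferences(string userId, string preferencesJson);
}
```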
I'd say it depends on your application. If it's a banking application, then yes, I want something timing out after 5 minutes and asking for my password again. If it's Facebook, then not so much.
Silverlight depends on XMLHttpRequest like any other Ajax implementation and is therefore capable of maintaining a session, forms authentication, roles, profiles, etc.
The benefit you are getting is doing away with virtually all of the traffic. JSON requests are negligible compared to serving pages. Even the .xap can be cached on the client.
I would say you are getting the best of both worlds in regards to your question.
I'm evaluating various hosting options for an ASP.NET Core application.
In the new programming model of ASP.NET, you process a request with a set of middleware components (a mixture of the older IHttpModule and IHttpHandler concepts).
You can have middleware responsible for authentication, handling static files, or compressing the response before sending it (just to name a few).
Here comes the confusion.
Where should the boundary between the server and the app be drawn in terms of responsibility?
Which side should be responsible for compressing the response? With IIS this was handled by the server and configured in web.config. Kestrel doesn't provide this functionality AFAIK, so you need to implement custom middleware in the app to handle it for you. Which approach is more appropriate?
What about authentication? IIS provides settings for authentication (anonymous, impersonation, forms auth). On the other hand, in ASP.NET Core we can also write app middleware to handle this for us.
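To make the comparison concrete, here is a rough sketch of what handling both compression and authentication in the app (rather than in IIS) could look like, assuming ASP.NET Core 2.x and the Microsoft.AspNetCore.ResponseCompression package:

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Response compression in the app instead of IIS dynamic compression.
        services.AddResponseCompression();

        // Cookie-based authentication in the app instead of IIS forms auth.
        services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
                .AddCookie();

        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        // The order of Use* calls is the pipeline order.
        app.UseResponseCompression();   // compress responses before they are written
        app.UseStaticFiles();           // static files served by the app, not the server
        app.UseAuthentication();        // populates HttpContext.User for what follows
        app.UseMvc();
    }
}
```

With this setup the behaviour travels with the app to whatever server hosts it (Kestrel behind a reverse proxy, IIS, HTTP.sys) instead of living in server configuration.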
OK, SSL is handled by the server, because it sits lower in the protocol stack and the app operates on HTTP(S) only.
What responsibilities should server have? What responsibilities should an app have?
The server is responsible for implementing the base HTTP protocol, managing connections, etc. It may also choose to offer other features (e.g. Windows auth), but we recommend against that unless it can provide a distinct advantage over a middleware implementation. E.g. Windows auth could be implemented in middleware, but it would be much more difficult due to some of the connection management constraints. Compression could be implemented in middleware just as easily as in the server.
As stated on wikipedia:
"The primary function of a web server is to store, process and deliver
web pages to clients"
The thing is that all famous http servers (nginx, apache, IIS, ...) come with a lot of modules that can handle lots of different tasks including the ones you mentioned in your question (authentication, compression, ...).
It's quite likely that the more modules you add, the slower your HTTP server will be. IIS, for instance, is by no means known as the fastest HTTP server around, but if you remove all the modules and use it just for serving resources, it becomes really fast, because that is what it was built for back in the day!
The same question of responsibility comes up with every kind of software application.
Think about databases, whose main role is to store data. RDBMSs like Oracle or SQL Server are pretty good at it. But every time they release a new version, they also add new functionality that has nothing to do with storing data. And people use it! ;-)
How many times have people used their DB as a search engine? I've seen people sending emails with SQL Server! But the worst was some guys trying to call web services from within stored procedures ;-)
It's always tempting to have one tool do everything, but you need to keep in mind that it was not built for every purpose. I'd rather use a bunch of lightweight tools that each have one single responsibility and handle it correctly.
Now, back to your question: I think making use of middleware is a good approach. That way you have control over the entire pipeline and you know exactly what your request has been through (see the sketch below). Middleware is also testable! Getting rid of all the unnecessary modules will definitely lead you to a more lightweight HTTP server.
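As a minimal sketch of what "knowing exactly what your request has been through" means in practice, here is a tiny custom middleware (the class name and header are made up) that can be unit-tested like any other class:

```csharp
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class TimingMiddleware
{
    private readonly RequestDelegate _next;

    public TimingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        var watch = Stopwatch.StartNew();

        // Add the timing header just before the response headers are sent.
        context.Response.OnStarting(() =>
        {
            context.Response.Headers["X-Elapsed-Ms"] =
                watch.ElapsedMilliseconds.ToString();
            return Task.CompletedTask;
        });

        await _next(context);   // hand off to the rest of the pipeline
    }
}

// Registered in Startup.Configure; its position determines what it sees:
//   app.UseMiddleware<TimingMiddleware>();
```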
The classic "it depends" answer is also acceptable. If you run some tests and find that the gzip compression module is 10x faster than the middleware, go with the module! Don't be dogmatic either!
I have a scenario where I have multiple websites using a common DLL for authentication and general user detail fetching.
I now need to update the common DLL with slightly different login logic, which means I'll need to push the new DLL into every website and do a release process for each.
I'm wondering whether it's better to host the common authentication methods in a web service of some sort and then have the websites call that internally. Would it be an internal web service? Ajax callbacks from a server-side-only website? Or should I stick with the DLL method to ensure code changes don't break the sites?
Are there any security concerns when not using a DLL for this kind of task?
Using a web service seems a good way to do that. It will use less memory and can be updated independently from the websites (if ever needed).
You could maybe go for a WCF service (with a duplex TCP binding?).
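To sketch that idea (all names here are invented for illustration), the shared contract could look something like this, with each website calling it over an internal binding:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IAuthService
{
    [OperationContract]
    AuthResult ValidateCredentials(string userName, string password);

    [OperationContract]
    UserDetails GetUserDetails(string userName);
}

[DataContract]
public class AuthResult
{
    [DataMember] public bool Succeeded { get; set; }
    [DataMember] public string FailureReason { get; set; }
}

[DataContract]
public class UserDetails
{
    [DataMember] public string UserName { get; set; }
    [DataMember] public string DisplayName { get; set; }
    [DataMember] public string Email { get; set; }
}

// Each website would call it over an internal binding, e.g.:
//   var factory = new ChannelFactory<IAuthService>(
//       new NetTcpBinding(), "net.tcp://authserver.internal:8523/auth");
//   IAuthService auth = factory.CreateChannel();
//   var result = auth.ValidateCredentials(userName, password);
```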
Both approaches work in my opinion, but there are significant differences between them that we should keep in mind.
First of all, all of this also depends on the language you are using, because sometimes the best theoretical answer is not easily implemented in every language, making it practically unusable.
So, with this in mind, the best way for me is to have an internal web service that deals with all requests regarding this "authentication and general user detail" module, assuming that all websites use the same DB (or data layer); otherwise you will need to create a web service for each one, and that's a completely different story. This approach will give you flexibility and maintainability. You could use direct Ajax requests to this web service, or make the calls from your web server and then reply to the browser with that information. The second option is more time-consuming but much more secure, and if it is a truly internal web service (i.e. hosted on the same machine), the lag will not be noticeable.
The DLL approach fits when you need to apply the same business logic to different services. In practical terms: you have two completely separate websites that use the same kind of authentication logic. Keep in mind that using this approach with websites that share the same data layer will, most of the time, force you to keep "deprecated" behaviour working alongside the new implementation while you are updating the DLL on all the websites.
Most if not all of the NSB examples for ASP.NET (or MVC) have the web application sending a message using Bus.Send and possibly registering for a simple callback, which is essentially how I'm using it in my application.
What I'm wondering is if it's possible and/or makes any sense to handle messages in the same ASP.NET application.
The main reason I'm asking is caching. The process might go something like this:
User initiates a request from the web app.
Web app sends a message to a standalone app server, and logs the change in a local database.
On future page requests from the same user, the web app is aware of the change and lists it in a "pending" status.
A bunch of stuff happens on the back end and eventually the request gets approved or rejected. An event is published referencing the original request.
At this point, the web app should start displaying the most recent information.
Now, in a real web app, it's almost a sure thing that this pending request is going to be cached, quite possibly for a long period of time, because otherwise the app has to query the database for pending changes every time the user asks for the current info.
So when the request finally completes on the back-end - which might take a minute or a day - the web app needs, at a minimum, to invalidate this cache entry and do another DB lookup.
Now I realize that this can be managed with SqlDependency objects and so on, but let's assume that they aren't available - perhaps it's not a SQL Server back-end or perhaps the current-info query goes to a web service, whatever. The question is, how does the web app become aware of the change in status?
If it is possible to handle NServiceBus messages in an ASP.NET application, what is the context of the handler? In other words, the IoC container is going to have to inject a bunch of dependencies, but what is their scope? Does this all execute in the context of an HTTP request? Or does everything need to be static/singleton for the message handler?
Is there a better/recommended approach to this type of problem?
I've wondered the same thing myself - what's an appropriate level of coupling for a web app with the NServiceBus infrastructure? In my domain, I have a similar problem to solve involving the use of SignalR in place of a cache. Like you, I've not found a lot of documentation about this particular pattern. However, I think it's possible to reason through some of the implications of following it, then decide if it makes sense in your environment.
In short, I would say that I believe it is entirely possible to have a web application subscribe to NServiceBus events. I don't think there would be any technical roadblocks, though I have to confess I have not actually tried it - if you have the time, by all means give it a shot. I just get the strong feeling that if one starts needing to do this, then there is probably a better overall design waiting to be discovered. Here's why I think this is so:
A relevant question to ask relates to your cache implementation. If it's a distributed or centralized model (think SQL, MongoDB, Memcached, etc.), then the approach that @Adam Fyles suggests sounds like a good idea. You wouldn't need to notify every web application - updating your cache can be done by a single NServiceBus endpoint that's not part of your web application. In other words, every instance of your web application and the "cache-update" endpoint would access the same shared cache. If your cache is in-process however, like Microsoft's Web Cache, then of course you are left with a much trickier problem to solve unless you can lean on Eventual Consistency as was suggested.
If your web app subscribes to a particular NServiceBus event, then it becomes necessary for you to have a unique input queue for each instance of your web app. Since it's best practice to consider scale-out of your web app using a load balancer, that means that you could end up with N queues and at least N subscriptions, which is more to worry about than a constant number of subscriptions. Again, not a technical roadblock, just something that would make me raise an eyebrow.
The David Boike article that was linked raises an interesting point about app pools and how their lifetimes might be uncertain. Also, if you have multiple app pools running simultaneously for the same application on a server (a common scenario), they will all be trying to read from the same message queue, and there's no good way to determine which one will actually handle the message. More often than not, that will matter. Sending commands, in contrast, does not require an input queue according to this post by Udi Dahan. This is why I think one-way commands sent by web apps are much more commonly seen in practice.
There's a lot to be said for the Single Responsibility Principle here. In general, I would say that if you can delegate the "expertise" of sending and receiving messages to an NServiceBus Host as much as possible, your overall architecture will be cleaner and more manageable. Through experience, I've found that if I treat my web farm as a single entity, i.e. strip away all acknowledgement of individual web server identity, that I tend to have less to worry about. Having each web server be an endpoint on the bus kind of breaks that notion, because now "which server" comes up again in the form of message queues.
Does this help clarify things?
An NSB endpoint can be created to subscribe to the published event and update the cache (a sketch follows below). The event shouldn't be published until the actual update is made, so you don't get out of sync. The web app would continue to pull data from the cache on the next request, or you can build in some kind of delay.
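A rough sketch of such a subscribing endpoint, assuming an NServiceBus 6+ handler signature; the event type, cache abstraction, and key format are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Placeholder event published by the back end once the update is committed.
public class RequestApproved : IEvent
{
    public Guid RequestId { get; set; }
}

// Placeholder abstraction over whatever shared/distributed cache is in use.
public interface ICacheClient
{
    Task RemoveAsync(string key);
}

// Runs in the dedicated subscriber endpoint, not in the web application itself.
public class RequestApprovedHandler : IHandleMessages<RequestApproved>
{
    private readonly ICacheClient _cache;   // injected by the endpoint's container

    public RequestApprovedHandler(ICacheClient cache)
    {
        _cache = cache;
    }

    public async Task Handle(RequestApproved message, IMessageHandlerContext context)
    {
        // Drop the stale "pending" entry so the web app's next read goes back to
        // the database (or web service) and sees the approved state.
        await _cache.RemoveAsync("request:" + message.RequestId);
    }
}
```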
Excuse me if I sound a little stupid, but this has confused me to the core, and I have been searching like crazy on the net with no definitive answer, so I hope someone can shed more light on this matter.
I want to create a portal site, and my client requires that everything be AJAX'ed, so I have been playing with ASP.NET AJAX 4, client-side templating and web services. Of course the performance is great with JSON results, but my web service code will be public, because anything available to JavaScript is available to anyone. From what I've read, to avoid this I must:
Use SSL, but this is a portal site and the front end should not use SSL.
Use authentication. That is fine for the back end but not the front end, as login is not required there.
After reading a lot, as I've mentioned, I have come across the following pitfalls when using web services with AJAX, and I hope there is a solution or at least a way to add more security.
DoS:
I have read some articles suggesting that you should throttle using IP detection and block requests from that IP for a while (a rough sketch of this follows after these questions), but here are some of the things I am worried about:
Will it affect search engine crawlers?
Will a hacker be able to bypass this by using a proxy or other means?
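For what it's worth, here is a very rough sketch of the IP-throttling idea as an IHttpModule for classic ASP.NET; the limits and time window are arbitrary, and a real version would also need to deal with proxies (X-Forwarded-For), crawler whitelisting, and memory growth:

```csharp
using System;
using System.Collections.Concurrent;
using System.Web;

public class ThrottleModule : IHttpModule
{
    // Per-IP request count within the current one-minute window.
    private static readonly ConcurrentDictionary<string, (int Count, DateTime WindowStart)> Hits =
        new ConcurrentDictionary<string, (int Count, DateTime WindowStart)>();

    private const int MaxRequestsPerMinute = 120;   // arbitrary placeholder limit

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            var ip = context.Request.UserHostAddress ?? "unknown";
            var now = DateTime.UtcNow;

            var entry = Hits.AddOrUpdate(
                ip,
                _ => (1, now),
                (_, current) => now - current.WindowStart > TimeSpan.FromMinutes(1)
                    ? (1, now)                                   // start a new window
                    : (current.Count + 1, current.WindowStart)); // count within window

            if (entry.Count > MaxRequestsPerMinute)
            {
                context.Response.StatusCode = 429;   // Too Many Requests
                context.Response.End();              // short-circuit the request
            }
        };
    }

    public void Dispose() { }
}
```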
Session hijacking:
This is scary; I still don't know how this can happen when you are using ASP.NET membership. I thought it was a pretty solid membership system!
And will a hacker be able to steal someone's password through this method?
A way to hide or encrypt your code:
I think I am making a fool of myself here, because I already mentioned that whatever is available to JavaScript is available to anyone, but my client would not want people to see the logic and functions behind the code.
Hiding the web service:
If you use Fiddler, you can see in the raw data the path to, for example, www.mysite.com/toparticles/getTopArticles(10). Again, this scares my client. I have tried to disable WSDL and documentation in web.config, but this only blocks direct access to those files and nothing more, or am I wrong? Is there a way to hide the path to the web service?
So all in all, my top concerns are to prevent hammering of any of my web services and to hide my code as much as possible.
Am I too paranoid? On the front end I am mostly going to be pulling data, but I may also give the user the option to save, for example, his widget preferences in the DB, etc., and it is not going to be over SSL, so the above security threats are valid.
I hope someone can also share their experience on this matter.
Thanks in advance.
Any functionality exposed on the web is going to be, well... exposed on the web. Even if you were using pure ASP.NET with postbacks, sniffers can see the traffic and mimic the postbacks; Ajax just takes that to its logical extreme. Web services are (for the most part) just like any other GET/POST system (RESTful or not).
There are some methods that you can use to secure your webservices from unauthorized access, but in truth these are the exact same things you would do to secure any other site (asp.net, traditional web, etc).
There are lots of articles all over the web about how to secure your website, and these will apply equally well to AJAX, Webservices, etc.
If you are really concerned about your web services being publicly exposed, you can use your own custom reverse proxy to hide the services inside the customer's network and only expose the proxy to the outside world. You can then secure the services so they are only accessed through the proxy, and provide whatever security on the proxy you feel is relevant. That way all traffic comes through servers that you specify and is restricted (to a reasonable degree) from prying eyes. In general, though, I think this might be overkill, especially for a portal site.
One thing to talk with your client about is the upsell value of using web services as a sellable product to integrators. In other words, design the security into the web services and use the portal only as an example of how to put it all together. A clear example of this is SharePoint, which is in truth a collection of web services and processes; the website is really just for convenience, and the power of SharePoint is in the interop of the services.
For more specific answers to your security questions, there are plenty of posts here on SO as well as the web covering each of your specific points.
We've got a back office CRM application that exposes some of the data on a public ASP.NET site. Currently the ASP.NET site sits on top of a separate, cut-down version of the back office database (we call this the web database). Daily synchronisation routines keep the databases up to date (hosted in the back office). The problem is that the synchronisation logic is very complex and time-consuming to change. I was wondering whether using a SOAP service could simplify things? The ASP.NET web pages would call the SOAP service, which in turn would do the database calls. There would be no need for a separate web database or synchronisation routines. My main concern with the SOAP approach is security, because the SOAP service would be exposed to the internet.
Should we stick with our current architecture? Or would the SOAP approach be an improvement?
The short answer is yes, web service calls would be better and would remove the need for synchronization.
The long answer is that you need to understand the technology available for you in terms of web services. I would highly recommend looking into WCF which will allow you to do exactly what you want to do and also you will be able to only expose your services to the ASP.NET web server and not to the entire internet.
There would be no security problem. Simply use one of the secure bindings, like wsHttpBinding.
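For example, here is a minimal self-hosted sketch using wsHttpBinding with message security; the service and contract names are placeholders, and IIS hosting with equivalent web.config settings would be the more typical choice:

```csharp
using System;
using System.ServiceModel;

// Placeholder contract and service; the real one would sit in front of the
// back office database.
[ServiceContract]
public interface ICrmDataService
{
    [OperationContract]
    string GetCustomerSummary(int customerId);
}

public class CrmDataService : ICrmDataService
{
    public string GetCustomerSummary(int customerId)
    {
        // Real implementation would query the back office database here.
        return "customer " + customerId;
    }
}

class Program
{
    static void Main()
    {
        // wsHttpBinding with message security: each SOAP message is signed and
        // encrypted, and callers must authenticate (Windows credentials by default;
        // switch to UserName plus a service certificate for non-domain clients).
        var binding = new WSHttpBinding(SecurityMode.Message);

        using (var host = new ServiceHost(typeof(CrmDataService),
                                          new Uri("http://localhost:8731/CrmDataService")))
        {
            host.AddServiceEndpoint(typeof(ICrmDataService), binding, string.Empty);
            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```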
I'd look at making the web database build process more maintainable.
Since security is obviously a concern, this means you need to add logic to limit the types of data & requests and that logic has to live SOMEWHERE.