How to configure a drop-in ASP.NET "plugin"

We're developing a plugin for ASP.NET web apps that can, ideally, be dropped into the folder that makes up the public facing part of our client's web site, i.e. a foo.aspx file that becomes publicly accessible at, say, /foo.aspx (and can thus be accessed by our browser component written in JS and loaded into the client's website).
However, we need something we're not really sure how to do idiomatically: the ability to pass a few configuration parameters to foo.aspx in a way that, preferably, reuses whatever configuration mechanism our client is already using (we're assuming a sufficiently recent version of IIS), or at least something that's considered standard and can be applied to any IIS deployment.
Is the IIS metabase something to look at? We need a way to pass an SSL client certificate file (or a path to it) to foo.aspx so that it can talk back to our HTTP REST API over a secure channel. It's also not entirely obvious what a good, standard disk location would be for the certificate file itself.
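
For context, the closest thing to a "drop-in" standard here would be an appSettings entry in the client site's web.config read through ConfigurationManager, with the certificate dropped into App_Data (which IIS never serves directly). A minimal sketch, assuming hypothetical key names (FooCertificatePath, FooApiUrl) that foo.aspx would read:

```csharp
// Sketch only. Assumes the client adds two hypothetical keys to their web.config:
//   <add key="FooCertificatePath" value="~/App_Data/foo-client.pfx" />
//   <add key="FooApiUrl" value="https://api.example.com/" />
// App_Data is a conventional spot for the certificate because IIS refuses to serve its contents.
using System;
using System.Configuration;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Web.UI;

public partial class Foo : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string certPath = Server.MapPath(
            ConfigurationManager.AppSettings["FooCertificatePath"]);
        string apiUrl = ConfigurationManager.AppSettings["FooApiUrl"];

        // Attach the client certificate to the outgoing call to the REST API.
        var request = (HttpWebRequest)WebRequest.Create(apiUrl);
        request.ClientCertificates.Add(
            new X509Certificate2(certPath)); // add a password argument if the .pfx is protected

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // ... consume the REST API response over the secure channel ...
        }
    }
}
```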

Related

Hosting WebAPI on a HTTPS Azure Web Role and HTML Javascript on HTTPS Azure CDNs?

I have an AngularJS WebAPI application that has a Javascript front-end. The front end makes calls to the back-end WebAPI for data. In the future there may well be more than one front-end making calls to the back-end.
I would like to change this application to use HTTPS and am looking into how best to architect this. The way I see it there are two ways (maybe more).
(1) Host the WebAPI C# application, index.html, Javascript, Javascript libraries and other HTML on the one (or more) web roles.
(2) Host the index.html, Javascript, Javascript libraries and other HTML on a CDN and put the WebAPI C# application on one (or more) web roles at one location.
From a performance point of view, are there likely to be any issues with the split solution (2) when I am using SSL? Is there anything that I should consider that might help improve the start-up time (my goal is for this to be as fast as possible)?
One more question: if I were to use the Azure CDN, would I still be able to address the index of my web site as www.mywebsite.com, and if using HTTPS would I need an SSL certificate?
Option 2 is preferable.
Think of it this way: your application is what sits in the back end. The front end is just a suggested set of UI controls and interactions for consuming that application. If you can host them separately you gain several benefits, starting with not creating a UI dependency.
The approach would be like creating a thin client.
Since the application is AngularJS based, the UI is probably all static files: HTML, CSS, and JavaScript. You can host them in blob storage and scale them through the CDN. You can have a custom domain name pointing to Azure Blob Storage, like `www.yourdomain.com`. This has many benefits, including better pricing and scaling than web roles; bear in mind that you pay for web roles whether you are getting hits or not. The only downside is that, as far as I know, it would not be possible to use HTTPS, but that should not be a problem, since you are just hosting static content and templates that contain placeholders, no actual data.
On blob storage you can attach your own cache-control headers, allowing the browser to cache those files locally. A user then downloads those files once and they are served from the browser cache on subsequent visits. You can also store the content already compressed with GZIP and set the content-encoding property to let the browser know it is compressed, enabling a faster download. Don't forget to bundle your resources: bundle all your JS code into one JS file, all your CSS into one CSS file, and all your AngularJS views into the template.js file (which is itself bundled into the single JS file).
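
For instance, stamping those properties at deploy time might look roughly like the following sketch, assuming the WindowsAzure.Storage client library and placeholder container, file, and connection-string names:

```csharp
// Sketch: assumes the WindowsAzure.Storage NuGet package and a pre-gzipped bundle
// (app.min.js.gz produced by your build). Names and connection string are placeholders.
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class StaticContentDeployer
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<your-storage-connection-string>");
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("static");
        CloudBlockBlob blob = container.GetBlockBlobReference("js/app.min.js");

        using (var fileStream = File.OpenRead("app.min.js.gz"))
        {
            blob.UploadFromStream(fileStream);           // content is already gzip-compressed
        }

        blob.Properties.ContentType = "application/javascript";
        blob.Properties.ContentEncoding = "gzip";        // tells the browser to decompress
        blob.Properties.CacheControl = "public, max-age=604800"; // cache locally for a week
        blob.SetProperties();                            // push the header changes to the blob
    }
}
```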
You do need to host your back-end application in worker/web role instances, though. There you can use HTTPS, and making AJAX calls over HTTPS is no problem even though the page was loaded over HTTP, as long as the SSL/TLS certificate is signed by a CA recognized by the browser (i.e. a valid certificate). If you use a self-signed certificate, there is no way for the browser to prompt the user to accept it on an AJAX call. Keep this in mind if you plan to start with a self-signed one.
So everything that is not user/state dependent lives in blob storage, which is cheap, fast, and highly scalable; all your user data interaction happens through your worker/web roles via compact requests/responses, probably in JSON. You therefore need fewer web/worker roles to provide the same level of service.
Now, if you have a very asymmetric mix of heavy queries versus data-change requests, you should consider an approach like CQRS.

Flash Channel.Security.Error Unable to access remote resource

I tried to deploy my otherwise working Flex app on a web server (Tomcat 6). It threw a Channel.Security.Error. After some research, I became aware that a Flash movie loaded from flash_movie_domain will not be able to load resources from any other domain. Some suggested adding a crossdomain.xml. However, the crossdomain.xml route doesn't quite make sense to me.
In this case, I am loading resources from a third-party web site. My understanding is that I need this third-party website to include a crossdomain.xml in their root directory for my app to function. The third-party web service is provided as is; I will not be able to change what's given. Since the third party is providing public access, it has already explicitly given permission to the general public. Adding a crossdomain.xml to their root seems like a redundant act?
At the end of the day, I need to figure out a way to access the third party web service from a flash movie loaded from my domain. Thanks.
It sounds like you already have your answer.
This third-party web site will need to add a crossdomain.xml file that allows the Flash Player to access data from that domain.
I'm unclear how this third-party web site has granted you permission to access their data. But the Flash Player is placed in a sandbox by the browser, and the crossdomain.xml file gives the Flash Player permission to move out of its sandbox for the purpose of accessing the remote domain.
There is nothing redundant about saying something can be done and providing the technical tools to help make it happen.
Your alternative is to not access the site from Flash directly. You might be able to use an intermediary proxy on your own domain to retrieve the data and send it back to Flex, but it depends on the type of data.
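
If you go the proxy route, a minimal sketch of such an intermediary in ASP.NET might look like this (the third-party URL and handler name are placeholders; a real proxy would also handle errors, POSTs, and probably caching):

```csharp
// FlashProxy.ashx.cs - a minimal pass-through proxy, sketch only.
// The Flash movie calls this handler on *your* domain, so no crossdomain.xml is needed.
using System.Net;
using System.Web;

public class FlashProxy : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Placeholder third-party endpoint; forward the query string as-is.
        string remoteUrl = "http://thirdparty.example.com/service?" +
                           context.Request.QueryString;

        using (var client = new WebClient())
        {
            byte[] data = client.DownloadData(remoteUrl);
            context.Response.ContentType =
                client.ResponseHeaders[HttpResponseHeader.ContentType];
            context.Response.BinaryWrite(data);
        }
    }

    public bool IsReusable { get { return true; } }
}
```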

ASP.NET + Configuring Entity Tags

I have developed an ASP.NET web application and I'm putting the finishing touches on it. To assist with this, I have been using YSlow. With this tool, I have discovered that I have not properly configured the entity tags of the components on my pages. Unfortunately, I have no idea how to do this.
How do I configure entity tags on components within an ASP.NET page?
Here is what YSlow says:
There are 28 components with misconfigured ETags
http://localhost:81/resources/page.js
http://localhost:81/resources/images/bg.png
http://localhost:81/resources/images/app.png
...
Entity tags (ETags) are a mechanism web servers and the browser use to determine whether a component in the browser's cache matches one on the origin server. Since ETags are typically constructed using attributes that make them unique to a specific server hosting a site, the tags will not match when a browser gets the original component from one server and later tries to validate that component on a different server.
Thank you!
This is not really an ASP.NET issue since ETags (at least by default) are emitted by IIS in response to requests for static files. The few examples you've given are all static files (JS, PNG, etc).
Exactly why your ETags are misconfigured is difficult to say, but at a guess I'd say you're hosting your site in a web farm (more than one web server) and each server is generating its own ETag, making them less than useful.
See here for some more info: http://developer.yahoo.net/blog/archives/2007/07/high_performanc_11.html
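
If the server-specific ETags are getting in the way, one common workaround is a small HttpModule that strips the header so caching falls back to Last-Modified. A sketch, assuming IIS 7+ running in integrated pipeline mode (where the response header collection is writable):

```csharp
// Sketch: strips the ETag header from every response. Requires the IIS 7+ integrated
// pipeline; register it in web.config under <system.webServer><modules>.
using System;
using System.Web;

public class RemoveETagModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.PreSendRequestHeaders += (sender, e) =>
        {
            var app = (HttpApplication)sender;
            app.Response.Headers.Remove("ETag");   // drop the server-specific ETag
        };
    }

    public void Dispose() { }
}
```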

ASP.NET gone FUBAR on a production machine

Today we tried to put an ASP.NET application I helped to develop on yet another production machine. But this time we got a very weird error.
First of all, of all the ASP.NET pages, only Login.aspx was working. The rest just show a blank screen when they should have redirected to Login.aspx. The HTTP response is 200, but there is no content.
Even worse: when I enter the address of some nonexistent ASPX page, I also get HTTP 200! Or, when I enter gibberish into the code of some existing ASPX page (one which should have been accessible without login), I also get HTTP 200.
If I enter the name of some nonexistent resource (like asdasd.jpg), I get the expected 404.
The redirect to the login page is written manually in Global.asax. That's because the application has to support some alternate methods of authentication as well, so I can't just use Forms Authentication. I would suspect that Global.asax is failing, if it weren't for the working Login page.
It's also worth noting that this machine is both a Domain Controller and has SharePoint installed on it, although the website in question is listed in SharePoint's exception list.
I would check the following:
Is the application within a virtual application or its own site and not just a virtual directory?
Does the application have its own app pool? If not, is the app pool shared with apps running a different .NET version?
Is the .NET version of the application the correct one? 1.1 or 2.0?
Do the files in the file system have the correct permissions to be accessed via IIS?
Have you performed an IIS Reset?
Create a standalone test.aspx page within your folder that just displays the date/time and check that it works (a sketch follows this list).
Make this single test.aspx page throw an exception (e.g. divide by zero) and see what the outcome is.
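
Something like this would do for the standalone test page (a sketch only: no code-behind, no dependencies on the rest of the application):

```aspx
<%@ Page Language="C#" %>
<!-- Sketch of a self-contained diagnostic page; drop it in the application folder as test.aspx. -->
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        TimeLabel.Text = DateTime.Now.ToString();
        // To reproduce the failure case from the next step, uncomment:
        // int zero = 0; int boom = 1 / zero;
    }
</script>
<html>
<body>
    <p>Server time: <asp:Label ID="TimeLabel" runat="server" /></p>
</body>
</html>
```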
More information required.
What Op Sys?
What mode is IIS running under?
What version of .Net?
What version of SharePoint?
(Why are you using your DC as a web host?)
Does it work on the other production machines you've deployed to?
If so what is different between this machine and the working ones?
Did you deploy the same way?
Are you sure you're hitting the right machine?
Are you sure you're hitting the right web site?
What ISAPI components are installed globally and for the web site?
Is .aspx mapped to the ASP.Net ISAPI filter?
Do you have any HTTP Modules or HTTP Handlers configured?
Can you change the Global.asax to write out some messages so you can be sure the piece of code you're interested in is being reached? (A sketch follows this list.)
Anything coming up on the IIS log or the event logs?
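
For the Global.asax logging suggestion above, a sketch of what could be added (the log path is a placeholder, and the app pool identity needs write access to it):

```csharp
// Global.asax.cs - sketch: append a line per pipeline stage to a local log file
// so you can see how far requests actually get before the empty 200 is returned.
using System;
using System.IO;
using System.Web;

public class Global : HttpApplication
{
    private static void Log(string stage, HttpRequest request)
    {
        File.AppendAllText(@"C:\temp\pipeline.log",                      // placeholder path
            string.Format("{0:o} {1} {2}{3}", DateTime.UtcNow, stage,
                          request.Path, Environment.NewLine));
    }

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        Log("BeginRequest", Request);
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        Log("EndRequest " + Response.StatusCode, Request);
    }

    protected void Application_Error(object sender, EventArgs e)
    {
        Log("Error: " + Server.GetLastError(), Request);
    }
}
```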
Addition:
What version of .Net?
By the sounds of it, the .jpg request is being dealt with by IIS directly, which is why you get the 404, but the .aspx request is being dealt with by something else which, except for your login page, is always returning 200.
Assuming .aspx is wired correctly to .NET, the order of processing is based on ISAPI filters (high to low, then global before site), then the ASP.NET ISAPI extension (sorry, I said this was a filter earlier, but it's actually an extension). Then we get into the ASP.NET pipeline based on your .NET configs, which calls the HttpApplication (which includes your Global.asax code), then any HTTP modules, followed finally by an HTTP handler. Your ASP.NET web forms are just fancy HTTP handlers.
However, the request can be responded to and terminated from any point.
Since your code works on other machines though, I'm tempted to point a finger at SharePoint if it isn't installed on the working machines. Is this SharePoint 2007? That is also an ASP.Net application (I don't think 2003 was).

Re-publishing an ASP.NET Web Application While Site is Live

I am trying to get a grasp on how to handle updates to a live, functioning ASP.NET (2.0 or greater) Application while there are users on the site.
For example, suppose SO is an ASP.NET Web Application project. The project code compiles down to a single .DLL in the BIN folder. Now, there are constantly users on SO, so what would happen to users' actions/sessions if you used the Visual Studio .NET "Publish" feature (or just FTP'd everything again manually) while they are using the site?
Would creating an ASP.NET Web Site instead alleviate any problems that may or may not exist in the scenario above? I am beginning to develop a site as a user-driven Web Application, and I want to make sure that my inexperience with this does not annoy the [potentially] many users that I [want to] have 24/7.
EDIT: Sorry, I should have put this in a more exact context. Assume that this site is being hosted by a web hosting service with monthly fees. I won't be managing the server itself, just what the web host allows as a user of their services.
I create two Web sites in IIS. One is the production Web site, and the other is a static Web site with an HttpHandler that sends all requests to a single static "We're updating" HTML page served with an HTTP 503 Service Unavailable. Typically the update Web site is turned off. When it's time to update, we stop the production Web site, start the update Web site, and now we can fiddle with the production Web site all we want without worrying about DLLs being locked or worker processes needing to be spun down.
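A sketch of the catch-all handler on the update site (file names are placeholders); it maps every request to the static page and sets the 503 status so crawlers and monitors know the outage is temporary:

```csharp
// Sketch: the update site's only content. Map "*" to this handler in web.config
// so every path returns the maintenance page with 503 Service Unavailable.
using System.Web;

public class MaintenanceHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.StatusCode = 503;
        context.Response.AppendHeader("Retry-After", "600");    // hint: back in ~10 minutes
        context.Response.ContentType = "text/html";
        context.Response.WriteFile(context.Server.MapPath("~/updating.html")); // placeholder static page
    }

    public bool IsReusable { get { return true; } }
}
```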
I started doing this because
App_Offline.htm really does not work well in Web Gardens, which we use.
App_Offline.htm serves its page as 404, which is bad if you're down for a meaningful period of time.
We can start the upgraded production Web site with modified settings (only listening on localhost), where we can do a last-minute acceptance/verification that everything is working before we flip the switch, turning off the update Web site and re-enabling the production Web site.
Things this does not solve include
Any maintenance that requires a restart of the server--you still have downtime where no page is served.
Any maintenance that diddles with the .NET runtime, like upgrading to the latest service pack.
Other approaches I've seen include
Having two servers: send all load-balanced requests to one server and upgrade the other; then rinse and repeat. Most of us don't have this luxury.
Creating multiple bin directories, like bin-1.0.0.0 and bin-1.1.0.0 and telling ASP.NET which bin directory to use in the web.config file. (One advantage of this is that reverting to a previous binary is just editing a config file. A disadvantage is that it's harder to revert resources that don't end up in your binaries, like templates and images and such.) I don't remember how this actually worked--I think the application did some late assembly loading in its Global.asax based on its own web.config section (since you touched the web.config, the app had restarted, so it was okay).
If you find a better way, let me know!
Changing to the ASP.NET web site model won't have any effect, as the recycle will still happen. Some changes that trigger it for sure: web.config, Global.asax, App_Code.
After the recycle, the user will still be logged in because ASP.NET just has to validate the authentication ticket again. That is, provided you use a fixed machine key; otherwise the key changes on each recycle. Using a fixed key is something you want to do anyway, as other things break if the key changes across requests, e.g. ViewState validation and embedded resources (decryption of the URL fails).
If you can put the session out of process, for example in SQL Server, you will avoid losing the session. If you can't, your code will have to account for that. There are plenty of scenarios where you can avoid using session, and others where you can wrap it and re-retrieve the info if the session was cleaned. That should leave you with a handful of specific cases that you know can give trouble to the users, and for those you apply some of the suggestions others have already made.
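The "wrap it and re-retrieve" idea could look like the following sketch (the helper name and usage are illustrative, not a standard API):

```csharp
// Sketch of a defensive session accessor: if an app-domain recycle cleared the
// in-process session, the value is rebuilt instead of failing on a null.
using System;
using System.Web;

public static class SessionCache
{
    public static T GetOrReload<T>(string key, Func<T> reload) where T : class
    {
        var session = HttpContext.Current.Session;
        var value = session[key] as T;
        if (value == null)                 // lost on recycle (or never loaded yet)
        {
            value = reload();              // e.g. re-read the user's data from the database
            session[key] = value;
        }
        return value;
    }
}

// Usage (illustrative): var basket = SessionCache.GetOrReload("Basket", () => LoadBasketFromDb(userId));
```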
One solution could be to deploy your application into a load balanced environment (web farm).
When deploying a new version you would use the load balancer to redirect requests to the server you are not deploying to.
App_Offline.htm is a great solution for this, I think.
On SO we see the "application currently unavailable" page when a deployment begins.
I am not sure how SO handles it, but we usually put up a holding page, so whatever the user has done (adding a question or answering questions) does not get applied. As soon as they submit something they see a holding page asking them to try again after some time.
And if I am the user, I usually press the back button to make sure what I entered is saved in the browser history so that I can post it later.
Some sites we run are in a clustered environment, so I take one server offline and inform the load balancer that it will not be available, and once I make sure that the new version is working fine I bring it back live. I then do the same thing for the next server.
Do we have any other option?
It is not a technical solution, but set up a scheduled maintenance window. You can announce it in advance, giving your user base fair warning that there is a possibility that the application will not be available during that time frame.
