After migrating my site from Azure Websites to an Azure Cloud Service, I'm having some issues that I think are related to IIS (and my lack of expertise with it).
The issue is that sometimes an image that normally works fine fails to display. One way to reproduce it is to refresh the image's URL repeatedly; sometimes I'm forwarded to a PageNotFound error.
I have a URL path for images, /Clients/{id}/Images, to get the avatar, which is the same relative path as the image folders on disk.
I already granted full permissions and tried ignoring this route, to no avail. Any suggestions?
Are you by chance letting your clients upload images directly to your IIS machines? If so, do remember that Azure Cloud Service Roles run on distributed machines... so if one of the web servers saved the image to its local store, the others did not.
Thus, you want to make sure you store your uploaded content on a shared resource; in Azure's case, the best place is Blob storage.
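For what it's worth, a minimal sketch of writing an uploaded avatar to Blob storage with the current Azure.Storage.Blobs package (the container name and blob path are placeholders of mine, not anything from the question):

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public class AvatarStore
{
    private readonly BlobContainerClient _container;

    public AvatarStore(string connectionString)
    {
        // "avatars" is a placeholder container name.
        _container = new BlobContainerClient(connectionString, "avatars");
        _container.CreateIfNotExists();
    }

    // Every role instance reads the same blob, instead of each web
    // server keeping (or missing) its own local copy on disk.
    public async Task SaveAsync(string clientId, string fileName, Stream image)
    {
        BlobClient blob = _container.GetBlobClient($"clients/{clientId}/{fileName}");
        await blob.UploadAsync(image, overwrite: true);
    }
}
```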
HTH
In an upcoming version of a currently-in-development webapp, I need to serve multiple domains from a single site. The code on the site will recognize the individual domains and vary the content accordingly. I do not know all of the domains that we will be serving, as clients can add new domains to their site. The coding parts I know how to do: when clients add a domain, there will be a corresponding entry in our database, and that will act as a key to control which set of content is shown.
The thing is, I suck at system administration. The server already hosts a dozen different sites unrelated to this webapp, so it's not a situation where every domain that hits our server's IP can go through the code I describe above. If I knew the domains ahead of time, I could simply point them to our server's IP and then create bindings in IIS to handle each. But since I do not know the domains ahead of time, I'm rather at a loss. What can I do to enable my IIS7 server to support this situation?
After looking around a bit, I have found a few options for this.
1) Building It Into The Code
Probably the best option is to programmatically create bindings in IIS6 and in IIS7. This way everything is integrated into the webapp, meaning there's no muss or fuss outside of the app. It requires a bit more work in the app itself, but the benefits of keeping things clean and keeping all the functionality around this action inside the single codebase are almost definitely worth it.
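For IIS7 this can be done with the Microsoft.Web.Administration assembly. A minimal sketch, with the site name and domain as placeholders (note the calling process needs administrative rights on the box, which is a real consideration for a web app doing this to itself):

```csharp
using Microsoft.Web.Administration;

public static class BindingManager
{
    // Placeholder names; call this when a client registers a new domain.
    public static void AddDomainBinding(string siteName, string domain)
    {
        using (ServerManager serverManager = new ServerManager())
        {
            Site site = serverManager.Sites[siteName];
            // Binding format is "IP:port:hostHeader"; * matches any IP.
            site.Bindings.Add("*:80:" + domain, "http");
            serverManager.CommitChanges();
        }
    }
}
```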
2) PowerShell
Another option is to set up a PowerShell script that handles this by detecting changes to the database. This would also work well, but has the drawback of creating two codebases to maintain.
3) Remove Domain Bindings
This answer led me to try removing the existing domain from the webapp's bindings in IIS. Making this change resulted in being able to reach my webapp by just visiting the IP address (so the binding was no longer an issue). And the one domain we have set for this webapp so far still reached the desired site as well. So it seems that the solution could be as simple as having no host/domain listed in the bindings in IIS. As long as only one site does this, all traffic that does not match another binding loads that site. A big upside here is that it takes less time/effort than either of the coding solutions mentioned above. The downside is that only one site on the server can work this way, and you can no longer lock the server down to serving content only for recognized domains.
Is it possible to add an extra IP address to the server?
This way you could let IIS process all requests on this IP address and run your logic for those requests only, leaving the existing websites untouched.
I have an ASP.NET web site that responds with multiple skins depending on the domain that it is accessed via.
The problem is that authentication and some other features seem to suffer random glitches where the user is sent back to the login screen, or other session-controlled values appear to have been lost - but only when accessed via one of the domains. The other domain does not suffer the same issue.
On our test system, the issue DOES NOT exist when accessing via any domain. On live, the issue will happen at varying times during the session, even with identical steps followed. It is for these reasons that I don't think it is a bug in the application software.
On the live system, where the issue is, two websites are set up in IIS, each with bindings to the required domain. One accesses the site through a virtual directory at http://mysite.com/myvirtualdir, the other accesses the site at the root path at http://myalternatesite.com/. I don't think the virtual directory is the issue, however.
I've now solved my problem, though I'm still not sure what the exact cause was.
I opened up the website properties for the two websites in IIS, the one that worked and the one that didn't, and compared them.
For anyone else troubleshooting this issue, these are the steps that I took, in order of how likely I think they were to be the cause of the issue.
The second website was using the Default app pool. From what I can see, there is nothing particular about the Default app pool settings on this server that would cause session state to be lost, but I have now changed it to use the same app pool as the site that was working all along.
Disabled windows authentication to match the working website.
Changed default documents so that only the required document was listed.
Limited connections to 500 to match the working website.
Hope this is of use to somebody else.
I have a Flex app that is hosted on my server. It runs off an amfphp + MySQL stack.
I have had inquiries from potential clients who want to "white label" the product. Part of this means that they would want it to appear that the app is running off their server. So their clients would login at www.theirsite.com instead of www.mysite.com.
I obviously don't want to give them the actual app... but are there ways of letting their server redirect to mine without the user actually knowing?
I don't have extensive experience with Flex, but I've worked a lot with Flash. Several ideas come to mind:
Create a wrapper: a SWF that loads your app. With the proper security settings, it should work. Check out this article on crossdomain.xml and the specs for Security.allowDomain() (a minimal crossdomain.xml example follows this list).
You could simply embed the SWF from your site, similar to a YouTube video
If that is not satisfactory and your hosting service allows it, you can create a DNS A record or a CNAME record for a subdomain on their site, but then you would bear all the traffic that page gets.
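For the wrapper idea (the first option above), the crossdomain.xml policy file on your server would have to allow the client's domain. A minimal sketch, with a placeholder domain:

```xml
<?xml version="1.0"?>
<!-- Placeholder policy: allows the white-label client's domain to load
     the SWF and talk to the amfphp gateway on your server. -->
<cross-domain-policy>
  <allow-access-from domain="*.theirsite.com" />
</cross-domain-policy>
```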
Probably the easiest way would be for them to alter their DNS record to make a CNAME entry (alias) that is a subdomain of theirsite.com:
yourapp.theirsite.com => www.mysite.com
Often, out of sheer desperation, I will end up enabling "Everyone" access on a folder that a web app is accessing (perhaps for file creation, reading, etc.) because I can't figure out which user account to grant access to.
Obviously, this is a very bad thing to do.
Is there a way to determine what account IIS is using at that exact moment to access folders (and perhaps other resources like SQL Server, etc)?
Are there logs I can look at that will tell me? Or perhaps some other way?
I usually use Windows Auth without impersonation. Not sure if that information is relevant.
Another, more general approach would be to use a tool like Process Monitor and add a path filter for anything that starts with the root of the website (i.e. c:\inetpub\wwwroot). You then have to add the Username as a column by right-clicking on the column headers, but once you do that, the w3wp.exe process should show up whenever you try to access the website, and it will show which user account is being used. This technique should work for all file access permission issues.
If you don't use impersonation, the application pool identity is used in most cases, but access to SQL Server and UNC files is handled slightly differently.
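If you want to confirm it from inside the app, a throwaway diagnostic sketch (assuming nothing about your setup beyond classic ASP.NET) could be as simple as:

```csharp
using System.Security.Principal;
using System.Web;

// Throwaway diagnostic: returns the Windows account the worker process
// is using at that moment to touch files, SQL Server, and so on.
public class WhoAmIHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write(WindowsIdentity.GetCurrent().Name);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```

With impersonation off, this prints the application pool identity; you would register the handler in web.config to reach it from a browser.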
This MSDN article has all the information in one place, but you really need to spend a lot of time on it in order to digest every detail:
http://msdn.microsoft.com/en-us/library/ms998351.aspx
Use Sysinternals Process Monitor to see what is actually happening.
http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx
I am trying to get a grasp on how to handle updates to a live, functioning ASP.NET (2.0 or greater) Application while there are users on the site.
For example, suppose SO is an ASP.NET Web Application project. The project code compiles down to a single .DLL in the BIN folder. Now, there are constantly users on SO, so what would happen to users' actions/sessions if you used the Visual Studio .NET "Publish" feature (or just FTP'd everything again manually) while they are using the site?
Would creating an ASP.NET Web Site, instead, alleviate any problems that may or may not exist with the scenario above? I am beginning to develop a web site as a user-driven Web Application, and I want to make sure that my inexperience with this would not potentially annoy the [potentially] many users that I [want to] have 24/7.
EDIT: Sorry, I should have put this in a more exact context. Assume that this site is being hosted by a web hosting service with monthly fees. I won't be managing the server itself, just what the web host allows as a user of their services.
I create two Web sites in IIS. One is the production Web site, and the other is a static Web site with an HttpHandler that sends all requests to a single static "We're updating" HTML page served with an HTTP 503 Service Unavailable. Typically the update Web site is turned off. When it's time to update, we stop the production Web site, start the update Web site, and now we can fiddle with the production Web site all we want without worrying about DLLs being locked or worker processes needing to be spun down.
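A minimal sketch of what such a handler could look like (the holding page name and retry interval are placeholders, not my exact setup):

```csharp
using System.Web;

// Catch-all handler for the "We're updating" site: every request gets
// the holding page with a 503 so search engines don't index it.
public class MaintenanceHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.StatusCode = 503;
        context.Response.StatusDescription = "Service Unavailable";
        // Hint to clients and crawlers when to retry (in seconds).
        context.Response.AddHeader("Retry-After", "600");
        context.Response.ContentType = "text/html";
        context.Response.WriteFile("~/updating.html"); // placeholder page
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```

You would map all requests on the update site to it via a wildcard handler mapping in its web.config.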
I started doing this because
App_Offline.htm really does not work well in Web Gardens, which we use.
App_Offline.htm serves its page as 404, which is bad if you're down for a meaningful period of time.
We can start the upgraded production Web site with modified settings (only listening on localhost), where we can do a last-minute acceptance/verification that everything is working before we flip the switch, turning off the update Web site and re-enabling the production Web site.
Things this does not solve include
Any maintenance that requires a restart of the server--you still have downtime where no page is served.
Any maintenance that diddles with the .NET runtime, like upgrading to the latest service pack.
Other approaches I've seen include
Having two servers. Send all load-balanced requests to one server while you upgrade the other; then rinse and repeat. Most of us don't have this luxury.
Creating multiple bin directories, like bin-1.0.0.0 and bin-1.1.0.0 and telling ASP.NET which bin directory to use in the web.config file. (One advantage of this is that reverting to a previous binary is just editing a config file. A disadvantage is that it's harder to revert resources that don't end up in your binaries, like templates and images and such.) I don't remember how this actually worked--I think the application did some late assembly loading in its Global.asax based on its own web.config section (since you touched the web.config, the app had restarted, so it was okay).
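Purely as illustration of that last idea (since I don't remember the details), the Global.asax part might have looked something like this; the "BinFolder" appSettings key and folder layout are guesses on my part:

```csharp
// In Global.asax: load whichever versioned bin folder web.config
// currently points at. "BinFolder" is a made-up key for this sketch.
protected void Application_Start(object sender, System.EventArgs e)
{
    string binFolder = System.Web.Configuration
        .WebConfigurationManager.AppSettings["BinFolder"]; // e.g. "bin-1.1.0.0"
    string path = System.Web.Hosting.HostingEnvironment.MapPath("~/" + binFolder);
    foreach (string dll in System.IO.Directory.GetFiles(path, "*.dll"))
    {
        System.Reflection.Assembly.LoadFrom(dll);
    }
}
```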
If you find a better way, let me know!
Changing to the ASP.NET web site model won't have any effect, as the recycle will still happen. Some changes that trigger it for sure: web.config, global.asax, App_Code.
After the recycle, the user will still be logged in, because ASP.NET will just revalidate the authentication ticket. That is, provided you use a fixed machine key; otherwise it changes on each recycle. This is something you want to do anyway, as other things can break if the key changes across requests, e.g. viewstate validation and embedded resources (decryption of the URL fails).
If you can move the session out of process, for example into SQL Server, you will avoid losing the session. If you can't, your code will have to account for that. There are plenty of scenarios where you can avoid using session state, and others where you can wrap it and re-retrieve the info if the session was cleared. This should leave you with a handful of specific cases that you know can give users trouble, and for those you can apply some of the suggestions others have already made.
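Both of those are web.config settings; a rough sketch with placeholder values (generate your own keys and supply your own connection string):

```xml
<system.web>
  <!-- Fixed machine key so auth tickets and viewstate survive recycles
       and match across a web garden/farm. Values are placeholders. -->
  <machineKey validationKey="[generated validation key]"
              decryptionKey="[generated decryption key]"
              validation="SHA1" decryption="AES" />

  <!-- Out-of-process session state in SQL Server. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="[your connection string]"
                timeout="20" />
</system.web>
```

For SQL Server session state you also have to prepare the state database first, typically with the aspnet_regsql.exe tool that ships with the framework.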
One solution could be to deploy your application into a load balanced environment (web farm).
When deploying a new version you would use the load balancer to redirect requests to the server you are not deploying to.
App_offline.htm is a great solution for this, I think.
On SO we see an "application currently unavailable" page when a deployment begins.
I am not sure how SO handles it, but we usually put up a holding page, so whatever the user has done (adding a question or answering one) does not get saved. As soon as they submit something, they see a holding page asking them to try again after some time.
And if I am the user, I usually press the back button to make sure what I entered is saved in the browser history, so that I can post it later.
Some sites we run are in a clustered environment, so I take one server offline, inform the load balancer that it will not be available, and once I make sure the new version is working fine, I bring it back live. I do the same thing for the next server.
Do we have any other option?
It is not a technical solution, but set up a scheduled maintenance window. You can announce it in advance, giving your user base fair warning that the application may not be available during that time frame.