I uninstalled the Visual Studio Tools for Git so I could reinstall a later version (it was doing strange things like leaving merge markers in files).
Since then, in a mixed ASP.NET WebForms and MVC 4 project, I can no longer visit MVC-routed URLs. I get:
HTTP Error 403.14
Forbidden - The Web server is configured to not list the contents of this directory.
I've also made some code changes, but I rolled all of those back and the problem is still there. I then deployed to another environment and all is well there.
I will report back with what I find. Pretty bad experience so far.
So I push all changes up and then clone into a new folder. It works in the new folder!
I make a branch in the new folder, delete everything, copy in the broken folder's contents and look at what Git thinks has changed, to see what difference breaks the site.
All sorts of differences found, all of them whitespace: CRLF issues with Git. I love Git.
I check out a much earlier version. That doesn't work.
I open another MVC 4 project, and that works! So MVC and IIS Express 8 are able to serve pages, just not from this project (not even from an old commit that was working).
This is a serious problem. It worked for a moment and then stopped working.
Update
So I clone again into another new folder. It works. Let's see how long this lasts.
Update
I check out my changes branch and it still works. The last time I broke it was after copying the broken contents in from the other repo folder; Git told me that the modifications were all whitespace.
So whitespace or Linux LF characters caused a 403 and ruined my Friday.
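For anyone who hits the same thing, here is a rough sketch of the line-ending settings worth checking. This is general Git hygiene rather than a guaranteed fix for the 403; the repo path is a placeholder, and git add --renormalize needs a reasonably recent Git:

    cd C:\work\MyWebFormsMvcSite          # placeholder repo path

    # Convert LF to CRLF on checkout and back to LF on commit (typical Windows setting)
    git config core.autocrlf true

    # Or pin the policy in the repo itself so every clone behaves the same way
    Add-Content -Path .gitattributes -Value '* text=auto'

    # Re-apply the line-ending rules to the working tree and review the result
    git add --renormalize .
    git status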
Related
I've created an online ASP.NET app using VS2010 with VB. It has been online and working for over a year. It's on our company's public web server, but it's only for employee use. For as long as the site has been active and in use, I have had to make periodic changes to improve performance or to fix bugs; no problem until now. Suddenly, though, VS is throwing an error during the publishing process:
Unable to add 'app_offline.htm' to the Web site. The file 'app_offline.htm' already exists in this Web site.
The IDE is set up to create the site files in a local directory on my workstation machine, and then I copy the files to the server.
The local directory does not contain the app_offline.htm file, so I don't get why the error is saying that it already exists.
It appears that even with this error, all of the files are being properly created, and the site works fine after I copy them to the server, so this is apparently not a fatal error. But it's still unnerving to me to not know why it's doing that, and why just now when it has been fine for over a year.
I would either like to find what's causing the problem and fix it or to find a way to inhibit the process of creating the app_offline.htm file. Anyone have any ideas? No search has turned up any helpful information.
Thanks!
Have you tried removing the file in question from the project, publishing again, then adding it back and publishing?
You can right click on the file(s) and choose "Exclude from project..."
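Another thing worth trying, on the assumption that a stale app_offline.htm from an earlier interrupted publish is sitting in the publish output: delete it before publishing again. A minimal sketch, with a placeholder publish path:

    # Hypothetical publish target; point this at the local directory the IDE publishes to
    $publishDir = 'C:\Publish\MySite'

    # Remove any leftover app_offline.htm so the next publish can add its own copy
    Remove-Item -Path (Join-Path $publishDir 'app_offline.htm') -Force -ErrorAction SilentlyContinue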
I am new to web development and am unfamiliar with some of the best methods for testing the web front end. On my current project I am using WebLogic to deploy my JSP project through Eclipse.
The webpages build fine and everything works, but currently I am restarting my localhost server every time I make a change to the CSS or the JSP, and a single round trip can take up to 3-5 minutes. Is there a more efficient way of testing the CSS other than restarting my server every time? What about JSP?
Edit: Follow-up:
For anyone who stumbles past this: after reading up on the different staging methods as #better_use_mkstemp suggested, I couldn't figure out how to explicitly set Eclipse up for stage or nostage deployment; however, I did find some nifty things.
You can do an incremental publish (expand the drop-down in the Servers view in Eclipse, right-click the project that is deployed on a running server, then click "Incremental Publish"; this only pushes what has changed since the last publish), which helps a bit.
The other thing I found was automatic deployment when a file changes, but that had it pushing while I was still editing the file, which was a bit annoying. With this feature you can also have it auto-deploy each time there is a new version, but I didn't play around with that since we're not changing the version number yet.
Check how you are deploying your application. If you deploy using nostage mode, the server(s) will keep looking in the original deployment directory and will automatically detect JSP changes for refreshes.
This is as opposed to stage mode, where the original war/ear deployment is actually copied to each individual server.
Read more here: http://docs.oracle.com/cd/E11035_01/wls100/deployment/deploy.html#wp1027044
Sounds like if you keep dropping your updated deployment files in the same directory with nostage, you won't need a restart.
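For reference, a hedged sketch of what a nostage deployment from the command line with weblogic.Deployer can look like; the admin URL, credentials, application name and source path are placeholders, and weblogic.jar is assumed to be on the classpath:

    # Deploy an exploded directory in nostage mode so the server reads it in place
    # and picks up JSP edits without a full restart
    java -cp C:\Oracle\wlserver\server\lib\weblogic.jar weblogic.Deployer `
        -adminurl t3://localhost:7001 `
        -username weblogic -password welcome1 `
        -deploy -nostage `
        -name myJspApp `
        -source C:\work\myJspApp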
We have an asp.net web site that is deployed on several IIS servers. The site is compile-on-demand as opposed to a pre-compiled web application.
Normally deployments go fine but every now and again we get a 401 for one of the deployed pages on one of the servers. There is nothing special about which page or which server apart from the fact that it's generally the higher traffic pages that it happens to.
The only way to rectify this is to deploy the same page again.
The ACLs look fine on the files themselves so the thought is that there is a file locking issue in the Temporary ASP.NET Files folder when the specific page is re-compiled.
Has anyone seen this before or have any suggestions how to avoid this?
Note: This only seems to have happened since we moved to .net 4.0
As far as I can tell we are getting a 401.3 "Denied by resource ACL" (http://support.microsoft.com/kb/907273).
But I have not been able to confirm this.
Those kinds of locks have always been a problem with live site deployment. The reason it's hard to replicate is that the server is mid-request while you are copying/compiling, and this ends up confusing IIS.
We operate a blue/green deployment strategy on a 4-tier architecture, with the web site spread over 4 servers at the top tier. Because of the complexity the architecture introduced for deployments, we needed a way to deploy without disturbing any traffic to the "live" site. Following Fowler's advice, but not quite in the same way, we came up with a solution where we have 2 sites on each server (a blue and a green, or in our case site A and site B). The live site has the appropriate host header, and once we have deployed and tested to the non-live site, we flip the host headers of the 2 sites so that what was live is now the non-live site, and vice versa. The result is a robust deployment that can be done in hours and with the highest level of confidence.
This of course complicates your configuration and deployment slightly, but it's worth the effort. It kind of goes without saying that you want to script both the deployment and the host-header swapping.
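As a hedged illustration of what the host-header swap can look like when scripted; the site names, host names and port are placeholders, and this assumes the WebAdministration PowerShell module on the web servers:

    Import-Module WebAdministration

    $liveHost    = 'www.example.com'       # currently served by SiteA
    $stagingHost = 'staging.example.com'   # currently served by SiteB

    # Park the current live site on a temporary header so the bindings never collide
    Set-WebBinding -Name 'SiteA' -BindingInformation "*:80:$liveHost" -PropertyName HostHeader -Value 'swap.temp.local'

    # Promote the freshly deployed site to the live host header
    Set-WebBinding -Name 'SiteB' -BindingInformation "*:80:$stagingHost" -PropertyName HostHeader -Value $liveHost

    # Demote the old live site to the staging host header, ready for the next cycle
    Set-WebBinding -Name 'SiteA' -BindingInformation '*:80:swap.temp.local' -PropertyName HostHeader -Value $stagingHost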
When I deploy to a server I bring the site down for a minute (or however long the deployment takes) - it may be down anyway during this time as pages are recompiled, so it is not too much of a hit. You can do this by creating a file in the root of the app called app_offline.htm (it needs to be at least 512 bytes long). Once that file is created you can copy the resources to the folder knowing there will not be any locking issues; then, when the copy is complete, remove the app_offline.htm file.
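A minimal PowerShell sketch of that approach, with placeholder paths and a padded maintenance message (so the file clears the 512-byte threshold):

    $site   = '\\webserver\wwwroot\MySite'   # hypothetical deployment target
    $source = 'C:\build\MySite'              # hypothetical build output

    # Take the app offline; pad the content past 512 bytes
    $msg = '<html><body>Down for maintenance - back in a few minutes.</body></html>'.PadRight(600)
    Set-Content -Path (Join-Path $site 'app_offline.htm') -Value $msg

    # Copy the new files while ASP.NET holds requests off the app
    Copy-Item -Path (Join-Path $source '*') -Destination $site -Recurse -Force

    # Bring the app back online
    Remove-Item -Path (Join-Path $site 'app_offline.htm')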
For those who want to achieve a .NET website deployment without these issues, one option is to copy the new website files into a new folder first (not the active website folder). Then you just change IIS to point to the new folder after all copying is complete.
This can be done in a single server environment for those of us on more limited resources without multiple servers per website.
At my work we write PowerShell scripts to deploy websites. The PowerShell script creates a new directory with a timestamp, copies the new deployment there, then tells IIS to point the website to the new directory (leaving the old directories "orphaned" but still there).
If we really messed something up, we can simply revert by pointing IIS back at the previous time-stamped directory. Otherwise, if everything tests OK, we can delete the old folder.
This technique works well because you are never writing over a file while it is in use, and it results in zero downtime. The only effect you will see is the normal .NET "warm up" that occurs any time you change the code-behinds or assemblies.
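A hedged sketch of what such a script can look like, assuming the WebAdministration module and a site called "MySite"; the paths and names are placeholders (an illustration of the approach, not the poster's actual script):

    Import-Module WebAdministration

    $stamp  = Get-Date -Format 'yyyyMMdd-HHmmss'
    $target = "D:\Sites\MySite\$stamp"    # hypothetical releases root
    $build  = 'C:\build\MySite'           # hypothetical build output

    # Copy the new release into its own time-stamped directory
    New-Item -ItemType Directory -Path $target | Out-Null
    Copy-Item -Path (Join-Path $build '*') -Destination $target -Recurse

    # Repoint the site; the old directory stays behind in case a rollback is needed
    Set-ItemProperty -Path 'IIS:\Sites\MySite' -Name physicalPath -Value $target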
I had several answers suggesting deploying to a new environment. This is something we have been considering for the long term, but it's hard to justify the extra work when we regularly deploy only one or two files without a problem. I was really more interested in finding out what is actually happening and why.
In terms of a workaround (and this might sound obvious after the fact), a simple app pool recycle solves the permissions issue and is much easier than testing for the issue and redeploying the file until the problem goes away.
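For completeness, the recycle itself is a one-liner if the WebAdministration module is available (the pool name here is a placeholder):

    Import-Module WebAdministration
    Restart-WebAppPool -Name 'MyAppPool'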
Maybe I'm totally outdated, but for the last four years I've been using the simple FTP upload feature to push up new websites, without even building them in Visual Studio. Just a bunch of ASPX and CS files, exactly as they are in Visual Studio.
I do understand that compiling the project gives me some security (anyone who gets access to the server can't read those files in a text editor) and that I avoid the first-time compilation, but is that so important?
I mean, anyone who has access to the server can already do far more harm than just reading CS files instead of DLLs.
First-time compilation usually takes no more than a minute; just finding the compiled version of the site would take as long.
Now I'm watching a video on PluralSight which explains the new MSDeploy tool available for ASP.NET, and I can't see any good reason to use it.
So what's wrong with the old-fashioned way of just sending files via FTP, without compiling or using fancy tools?
I did a speed test, and with MSDeploy I can deploy a website twice as fast as with old-fashioned FTPing. So instead of 4 minutes it takes 2.
Now, from another perspective: say I already have a live project on the web in which I have to change Default.aspx because of a typo in some HTML tag. Deployment via MSDeploy will take 10 times longer than uploading that one file.
Am I missing something?
MSDeploy does things which FTPing to a site can't do. Need to change a machine.config? You're unlikely to have FTP write access to the folder which contains it. Want to change a server setting in a server-version-independent manner? FTP won't do that. Etc. FTP works fine for copying files to folders in which you have write access, but that's all it can do.
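For a concrete feel, a hedged sketch of a plain content sync with MSDeploy; the install path, server URL, site name and credentials are all placeholders:

    $msdeploy = 'C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe'

    # Push the build output to a remote IIS site over the Web Deploy handler
    & $msdeploy '-verb:sync' `
        '-source:contentPath=C:\build\MySite' `
        "-dest:contentPath=Default Web Site/MySite,computerName=https://webserver:8172/msdeploy.axd,userName=deployUser,password=secret,authType=Basic" `
        '-allowUntrusted'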
When you deploy a project you can do a lot of things with it.
You can set up a job in your deployment that packages all your JavaScript into one file and all your CSS into one file.
You can set up a job in your deployment that changes a bunch of config settings to match your production server settings (rather than development settings).
The idea of deployment is that you take your current development website and transform it into a production website without having to do any of that manually.
The most important thing is that when deployment is the only way to update your website, you will never forget to package your JS or forget to remove some debugging code, because you can't just sneakily update a single file.
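As a hedged sketch of the config-settings step described above, done here with plain PowerShell and XML rather than Visual Studio's built-in web.config transforms; the file path and the "MainDb" connection string name are made-up examples:

    $configPath = 'C:\build\MySite\web.config'
    [xml]$config = Get-Content -Path $configPath

    # Turn off debug compilation for production
    $config.configuration.'system.web'.compilation.SetAttribute('debug', 'false')

    # Point the hypothetical "MainDb" connection string at the production server
    $conn = $config.configuration.connectionStrings.add | Where-Object { $_.name -eq 'MainDb' }
    if ($conn) { $conn.connectionString = 'Server=prod-sql;Database=MySite;Integrated Security=True' }

    $config.Save($configPath)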
I'm trying desperately to move from VSS to a real source control system. Options include TFS and SVN.
My designers need to keep their ability to modify source files and instantly preview their changes in a browser without having to commit their changes. Using FPSE with VSS, this works flawlessly, since saving a file causes the copy in the working folder on the dev server to be updated, so they can just save and refresh their browser which is pointed at the dev server.
The site in question consists of 350k+ lines of classic ASP code and some new ASP.NET MVC. They only need to be able to modify views within the MVC code, not C#.
Though Expression includes a version of Cassini for local debugging, Cassini does not support classic ASP.
Surely someone has solved this problem before. It can't be necessary to install IIS on each designer's machine (this is absolutely untenable). I need a way to have a common working folder on a dev webserver updated whenever someone saves a file locally, just like using FPSE.
I'd rather not write an FPSE proxy that knows how to talk to TFS/SVN. Any suggestions?
(I know I've asked this question in the past, but I haven't yet found a solution.)
Why the need to copy the source files when they are saved? Why not simply save the files to a network share and work on them directly? If the dev server is constantly being overwritten after every save anyway, surely the effect is the same?
This probably won't be as instantaneous as you'd like, but with TFS you could set up a Continuous Integration (CI) build that builds and deploys the project to a test server on check-in. If you do this, you'll want the designers checking in to a QA-type branch; then, once they are happy with how things look, they can merge to the mainline branch for the real build and integration.
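If the near-instant, FPSE-style feedback is still wanted on top of that, one hedged option is a small file watcher on each designer's machine that copies every saved file to a share on the dev server. The local and remote paths below are placeholders, and this is only a rough approximation of what FPSE did:

    $local  = 'C:\work\MySite'        # designer's working copy
    $remote = '\\devserver\MySite'    # share backing the dev web server

    $watcher = New-Object System.IO.FileSystemWatcher $local
    $watcher.IncludeSubdirectories = $true
    $watcher.EnableRaisingEvents   = $true

    # On every save, mirror the changed file to the same relative path on the dev server
    Register-ObjectEvent -InputObject $watcher -EventName Changed `
        -MessageData @{ Local = $local; Remote = $remote } -Action {
            $data = $Event.MessageData
            $path = $Event.SourceEventArgs.FullPath
            $dest = $path -replace [regex]::Escape($data.Local), $data.Remote
            Copy-Item -Path $path -Destination $dest -Force
        } | Out-Null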