We're working on an Umbraco site: multiple development machines using a shared development database.
When one developer makes changes to content in the CMS and does a Save and Publish, the change is reflected on his machine but not on the other development machines.
This doesn't seem to make sense, as we're all looking at the same database. We've tried doing an IIS reset to see if caching is at work, but that doesn't seem to make a difference either.
Any ideas what on earth could be going on?
Umbraco does a lot of caching, so it doesn't have to hit the database all the time. Normally, all of the published content is cached in an XML file at App_Data\umbraco.config. You just need to have your developers right-click the root of the content tree in the Umbraco backoffice and choose "Republish the entire site" to regenerate that on-disk XML cache from the XML cache in the database.
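If clicking through the backoffice on each machine gets tedious, the same refresh can, as far as I know, also be triggered from code. A rough sketch (umbraco.library.RefreshContent() is the call I believe performs the republish; treat it as an assumption and check it against your Umbraco version):

    using System;

    // Drop a page like this on a dev machine to rebuild its local XML cache
    // from the shared database, instead of clicking "Republish the entire site".
    public partial class RefreshCache : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // NOTE: assumed API; verify against your Umbraco version.
            umbraco.library.RefreshContent();
            Response.Write("XML cache regenerated from the database.");
        }
    }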
You also might need to reindex your Examine indexes. You can normally find the "Examine Management" dashboard in the Developer section of the Umbraco backoffice. By default, there are three indexes: InternalMember, Internal, and External. Unless you have membership going on in your Umbraco site, you can ignore the InternalMember index. The External index is used mostly for site searches. The Internal index is much more critical: it is used to cache media, and I believe it is also used by the backoffice, but I'm not 100% certain. Make sure that the Internal index is regenerated.
Remember that media files are stored in the /media directory by default. That means if developer 'A' uploads a file, the physical file won't show up on developer 'B's machine automatically.
I'll bet there are some cool ways to set up load balancing to handle caching for your dev setup. I'm pretty sure there are also ways to store the media in the database, so you don't have to worry about transferring files back and forth.
I currently develop Drupal websites using its multi-site feature, which allows me to have a single code base and support multiple distinct settings for each site.
I set up a dev server and I was quite happy with my arrangement of domains like example.com.local (not completely happy, because I had to perform a small conversion before entering production, but still quite happy), and the whole thing used to work well. Too bad I recently started working at places outside the LAN in which my dev server resides, mostly at clients' places where I need to demo their sites. So first of all I set up a dyndns.org account, and the server is now accessible through the Internet.
Unfortunately the whole domain-based multi-site setup ungracefully fell down, since I'm now accessing the server via myservername.dyndns.org and Drupal's algorithm takes the domain name into account, so I'm forced to use at least the TLD as part of the directory name (namely sites/local.example.com). So I decided to switch to directory-based multi-site, and now I'm able to access my server from inside the LAN using myservername.local/example.com (having renamed the sites/ subdirectories accordingly). You should easily see why this is suboptimal: when I browse to myservername.dyndns.org/example.com, Drupal looks for sites/org.example.com. I temporarily ended up making a link from sites/org.example.com to sites/local.example.com, but again, this does not scale well if and when I have to drop dyndns.org for, say, dev.mycorporatesite.com...
Is there any other possibility? I have full access to the server; I can change Apache2's configs, .htaccess and all that stuff.
I would recommend against running Drupal multisites out of folders; instead, set up your server with a fixed domain name and put each site in a subdomain.
So your dev server is at mydevserver.com
and then each site could be
client1.mydevserver.com
client2.mydevserver.com
etc.
If, at the same time as creating these, you also move the files folder from the default location to whatever the live site will use, i.e.
sites/livesite.com/files
Then when you have to go live, all the file references will be correct (if you are on Drupal 7 this might not be an issue).
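To illustrate the subdomain-per-client idea, here's a rough Apache sketch; the mydevserver.com names and paths are assumptions, and you'd also need wildcard DNS pointing *.mydevserver.com at the box:

    # One vhost serves every client subdomain from the same Drupal code base;
    # Drupal then picks the matching sites/<host> directory per request
    # (e.g. client1.mydevserver.com -> sites/client1.mydevserver.com).
    <VirtualHost *:80>
        ServerName  mydevserver.com
        ServerAlias *.mydevserver.com
        DocumentRoot /var/www/drupal

        <Directory /var/www/drupal>
            AllowOverride All
        </Directory>
    </VirtualHost>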
I usually place an app_offline.htm in my root directory when I am releasing a website to a production environment. However, sometimes when there have been a few big changes to the site, I would like to click around first to make sure it's stable, without allowing access to anyone other than me.
As far as I am aware this isn't possible, but I'm hoping someone has a neat solution...
The solution has to handle the case where someone has a deep link into the site, so using a default.htm/asp page in the root won't do the trick, unfortunately.
I agree with the staging environment answer, but otherwise here's one possible approach: temporarily block all IP addresses besides your own. This can be achieved through IIS Directory Security configuration, or programmatically in any number of ways.
You can redirect all the non-authorized users to an Under Construction page of some sort. Meanwhile, you can happily browse the site from your IP. When the site is vetted, you remove that IP restriction and the site becomes available to the world at large.
It's a difficult thing to achieve. That's why you should have a staging environment where everything is validated before shipping to production. Then, during the deployment process (which shouldn't take long), you can use an App_Offline file. The staging environment should be as close as possible to your production environment (in terms of installed software, patches and configuration, not in terms of hardware power, of course).
Another quick suggestion that would let you control things from the web.config: a custom module that redirects all requests to a static page except those matching a filter (e.g. hostname or URL sniffing) configured via the web.config.
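A rough sketch of such a module, assuming a hypothetical "MaintenanceAllowedIPs" appSetting (semicolon-separated) and an /underconstruction.htm holding page; you'd register it under system.web/httpModules (or system.webServer/modules for IIS7 integrated mode):

    using System;
    using System.Configuration;
    using System.Web;

    public class MaintenanceFilterModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += OnBeginRequest;
        }

        private static void OnBeginRequest(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;

            // Hypothetical setting listing the IPs that are allowed in.
            string allowed = ConfigurationManager.AppSettings["MaintenanceAllowedIPs"] ?? "";
            string clientIp = app.Request.UserHostAddress;

            // Don't redirect requests for the holding page itself (avoids a loop).
            bool isHoldingPage = app.Request.Path.EndsWith("underconstruction.htm",
                                                           StringComparison.OrdinalIgnoreCase);

            if (!isHoldingPage && Array.IndexOf(allowed.Split(';'), clientIp) < 0)
            {
                app.Response.Redirect(VirtualPathUtility.ToAbsolute("~/underconstruction.htm"), true);
            }
        }

        public void Dispose() { }
    }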
I have an ASP.NET site that I'm going to have to take down to make some major structural updates, and I was wondering how I should go about it from the client-side perspective. I have heard of an App_Offline.htm file or something like that, but I've never really gotten it to work successfully. Does anyone know how to do this?
EDIT
My app is running ASP.NET 4.0, for what it is worth.
Rather than messing with the app_offline silliness (among other reasons, you can't continue to see the site internally while performing maintenance), I created an additional "down for maintenance" site in IIS, which is normally stopped. It has the same IP, host headers, etc as the main site, but only has a default.aspx, an images folder and a stylesheet. That file contains the "This site will be down for maintenance until xx:xx PM CST" message.
When I'm ready to perform the update, I stop the main site and start the maintenance site, which then processes any requests it receives and, of course, returns the maintenance message.
When the maintenance is complete, stop the maintenance placeholder site, and restart your main site.
If you're using host headers, you can modify this approach so that the site remains internally accessible over your LAN/WAN while the maintenance site is handling external requests. A simple approach is to remove the host headers for <*>.yourdomain.com from the main site before starting the maintenance site, and ensure that the main site has an additional host header that is internally accessible (added to your local hosts file, for instance). When you start the maintenance site, it'll handle external requests while the primary site will handle requests to the internal-only header.
Alternatively (this seems complex, but saves you the trouble of adding and removing headers), create three sites:
Main site: Configured as in normal operation.
Maintenance site: Has same IP, host headers, etc as main site, but only contains default "down for maintenance" page and any images, css, etc that are required.
Internal test site: Duplicates the configuration of the main site and points to the same folders, but only has host headers, etc. for an internal name that is not in the public DNS.
This way, you only have to stop the main site and start the other two in order to funnel external traffic to the "down for maintenance" site, while you can still see and tweak the primary site. This is helpful for those last few minutes of testing/bug fixing that tend to come up during a deployment.
Update
If you don't have access to the server or IIS Manager, you most likely won't be able to use any of that. Assuming that your access is limited to your own folder, your options seem to be either to deploy app_offline.htm to the root of the site (ASP.NET checks for that filename), or to just replace the whole site with a "down for maintenance" app. Perhaps someone else will chime in with alternatives.
The trick for IE is to push a particular number of bytes over the wire; otherwise IE shows its not-so-friendly 404 error anyway. Here are more details: http://weblogs.asp.net/scottgu/archive/2006/04/09/442332.aspx
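For illustration, a minimal app_offline.htm along those lines; the only non-obvious part is the padding comment, which is there purely to push the file past the roughly 512-byte threshold that post describes:

    <!DOCTYPE html>
    <html>
    <head><title>Down for maintenance</title></head>
    <body>
      <h1>We'll be right back</h1>
      <p>The site is down for scheduled maintenance and should be back within 20 minutes.</p>
      <!-- Padding so the response exceeds 512 bytes and IE renders this page
           rather than its own "friendly" error page:
           xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
           xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
           xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
    </body>
    </html>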
If you have a good pre-release testing process and careful release procedures you probably won't be down for long.
In that case dropping a file called App_Offline.htm into your site root works fine. IIS will replace your site with it until you remove it. It's a painless way of making sure nothing's updating while you transition.
Mine just has a header with the site logo and a message that we'll be down for maintenance for up to twenty minutes. That's it. It took me about five minutes to write IIRC.
I would definitely recommend this for short sharp down periods of less than half an hour. Anything longer and you're probably looking at a major system change that warrants an approach like David Lively's.
Back in the days before the Visual Studio web server, we would host our local dev sites inside IIS. If you have a workstation version of IIS, that means only one website. What if you are working on several websites? Easy: create them in VDIRs, e.g. http://localhost/ProjectA, http://localhost/ProjectB.
Living in a VDIR doesn't sound so hard. Make sure all your images/CSS/links are relative paths and use the "~" a lot. Sounds like good practice. Hardcoding images etc. so they only work when the application is served from "/" sounds like bad practice.
There are some nuances anywhere you have to build a full link (mostly uncommon scenarios):
e.g. PROD: http://prodserver.com/images/up.jpg -> DEV: http://localhost/ProjectA/images/up.jpg
links / images in emails
Flash/JavaScript/Silverlight that requests data from the server, where the response contains links to images
The full link to a PayPal IPN (the page PayPal POSTs its response to)
So.. Are you doing this? Advantages / disadvantages? Any other gotchas I've missed?
I always avoid hardcoded paths, URLs, etc. unless there's a specific reason to do otherwise. Things inevitably change, and there's always the jump from your dev site to production.
The part that is usually the biggest nuisance is reusable client-side behaviors that need to reference other paths and can themselves be reused in pages across the application's directory structure.
I like the idea of a handler that responds to "globalvars.ashx" (or something like that; there are many ways to handle this) which dynamically emits (and allows caching of) the global application properties.
Say the handler responsible for globalvars.ashx writes the result of something like this:
String.Format("var ApplicationProperties = {{ RootPath:'{0}' }};", Request.ApplicationPath);
Your JS behaviors could theoretically reference that property object at any point via ApplicationProperties.RootPath.
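A minimal sketch of that handler as a standalone .ashx file (the file and class names are just placeholders); note the quotes around {0} so the emitted JavaScript value is a string literal:

    <%@ WebHandler Language="C#" Class="GlobalVars" %>

    using System;
    using System.Web;

    public class GlobalVars : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/javascript";
            // Let clients cache the emitted script.
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Write(String.Format(
                "var ApplicationProperties = {{ RootPath:'{0}' }};",
                context.Request.ApplicationPath));
        }
    }

Pages then just pull it in with a script tag pointing at globalvars.ashx, and every behavior reads ApplicationProperties.RootPath.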
In short, yes. The disadvantages of not doing so outweigh the benefits. I actually think your first two points can also be mostly mitigated by using app-relative paths ("~"), but nevertheless, some scenarios such as "integration-level" ones (like PayPal) may indeed prove tricky.
But at the end of the day, if you need to host your app in a virtual directory, you are virtually guaranteed problems if you haven't coded your app to be vdir-friendly from the get-go. I know I have.
Some background/context: my current production environment at work is almost always a virtual directory anyway, so I do this by necessity. And I've never had a problem when such an application was created as a root-level website. That certainly wouldn't be the case the other way around.
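For the "full link" gotchas in the question (links in emails, the PayPal IPN URL, data handed to Flash/Silverlight), a small helper along these lines keeps URLs correct whether the app lives at "/" or "/ProjectA"; the class name is just a placeholder:

    using System;
    using System.Web;

    public static class AbsoluteUrl
    {
        // "~/images/up.jpg" -> "http://localhost/ProjectA/images/up.jpg" in dev,
        //                      "http://prodserver.com/images/up.jpg" in production.
        public static string From(string appRelativePath)
        {
            string path = VirtualPathUtility.ToAbsolute(appRelativePath);
            return new Uri(HttpContext.Current.Request.Url, path).ToString();
        }
    }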
I'm building a static ASP.NET site (using Masterpages and a few forms) and I'm about to release it onto my production server.
I know about changing <compilation debug="true"> to false, but I'm wondering what other things I can do to obtain the highest speed possible. There is no data access in the site; it's all static content.
Does anyone have a checklist they run through or know of a good resource for setting up sites in a production environment, with a focus on performance?
Checklist so far (feel free to edit this yourself with any worthwhile additions)
Make sure <compilation debug="false" /> is actually set to false in Web.Config
Make sure <trace enabled="false" /> is actually set to false in Web.Config
Set necessary read/write/modify folder permissions for site
Enable GZIP in IIS (reduces size of pages/css/javascript dramatically)
Have you considered OutputCaching for any pages / controls?
Consider setting up web tests (e.g. WatiN for .NET) to make sure functionality on your site is still working OK
Make sure it isn't Friday afternoon!
If you're writing any log or output files, make sure the proper folder permissions are setup in the production environment. Typically debug/test environments are much more lax on file read/write permissions than production.
Don't deploy on Friday afternoons! This is guaranteed to mess up your head for the weekend.
Also, don't forget to check the gzip settings in IIS. Compressing output will make things travel across the wire much faster.
There is actually a very good checklist on how to perform a security deployment review provided on MSDN.
If it's all static content, you'll want to use aggressive output caching.
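For example, a page-level directive at the top of an .aspx page caches the rendered output (one hour here, chosen arbitrarily):

    <%@ OutputCache Duration="3600" VaryByParam="None" %>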
If your site uses a database only to present information, make the database read-only. That takes away all the lock handling and speeds up access a great deal.
If you have a back-end that updates the data, make it a separate database and schedule periodic updates of the read-only database, once a day or whatever that application needs.
If you just present news and other small things on a company website that don't change very often, then this solution is probably for you, even if it's a site with gigabytes of data. The key question is: how often do we update the data?
From what I see in daily business, no one really thinks about this solution because everything has to be "real time", but there are plenty of cases where it would be a perfect fit.
Review your web.config
Check debug (web.config / *.svc), tracing, ...
Update debug to production values:
email addresses
(web)service addresses
log file locations
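As a rough illustration (the appSettings keys here are made-up placeholders, not standard names), the production web.config ends up looking something like:

    <configuration>
      <appSettings>
        <!-- Swap dev values for production ones here. -->
        <add key="NotificationEmail" value="ops@example.com" />
        <add key="ReportServiceUrl" value="https://services.example.com/reports.svc" />
        <add key="LogFolder" value="D:\Logs\MySite" />
      </appSettings>
      <system.web>
        <compilation debug="false" />
        <trace enabled="false" />
      </system.web>
    </configuration>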
You should have some sort of test to verify the various functions of your site, and the permissions. For instance, once you publish, walk through a checklist: can I access X if I don't have permission? Do X, Y, and Z work in the application? I do this after every publish, because small changes can have a big impact.
You should read this:
https://stackoverflow.com/questions/72394/what-should-a-developer-know-before-building-a-public-web-site
It's currently the 9th highest-voted question on SO and in the top 3 most favorited. The caveat is that it's platform-agnostic, so it's missing some ASP.NET-specific items.
Thoroughly test the site outside of your corporate firewall / proxy after clearing your browser cache. This will help to ensure that all resources are publicly accessible (and are not on a local server or cached). For instance, you might find that you have used absolute URLs to include, say, JavaScript or CSS files. These work fine in your development environment, but as soon as the site goes live they are inaccessible. Or you have a CSS file in your cache that has subsequently been deleted, but you don't notice.
Ensure that any products / applications you use that have keys tied to a domain will work on your live site. This includes things like Google Maps keys or commercial 3rd party applications. It also includes automatically generated hyperlinks sent out in, say, emails. You wouldn't want a user registration email to have a link back to http://localhost/confirm.aspx or the like, would you?