In Azure Websites we have a Staging feature: we can deploy to a staging site, test it, fill all caches, and then swap staging with production.
How could I do this on a normal Windows server with IIS?
Possible Solution
One strategy I was thinking about is having a script which copies the content from one folder to another.
But there can be file locks, and since the copy is not transactional, the websites are in a kind of invalid state for some time.
First problem:
I have an external load balancer, but it is externally hosted and unfortunately currently not able to handle this scenario.
Second problem: since my build server scripts should always deploy to staging, I want a fixed name in IIS for the staging site. So I would also have to rename the sites.
Third problem: the sites are synced between multiple servers for load balancing. If I rebuilt the bindings on a site (to get a consistent staging server), I could run into timing issues because not all servers would point to the same folder at the same time.
Are there any extensions / best practices on how to do that?
You have multiple servers, so you are running a distributed system. It is impossible in principle to have an atomic release of the latest code version. Even if you made the load balancer atomically direct traffic to the new sites, some old requests would still be in flight. You need to be able to run both code versions side by side for a short time; this capability is a requirement for your application. It is also handy for rolling back bad versions.
Given that requirement you can implement it like this:
Create a staging site in IIS.
Warm it up.
Swap bindings and site names on all servers (a sketch follows below). This does not need to be atomic because, as explained above, it cannot be atomic anyway.
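A minimal sketch of the warm-up and swap, assuming appcmd is used to change the bindings. The site names, bindings and URLs below are hypothetical, and the exact appcmd syntax should be checked against your IIS version:

```python
"""
Sketch: warm up the staging site, then swap production/staging bindings
with appcmd. Run with administrative rights on each web server (or wrap
the appcmd calls in your own remoting). All names/bindings are made up.
"""
import subprocess
import urllib.request

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

PROD_SITE = "MySite"                # hypothetical production site name
STAGE_SITE = "MySite-Staging"       # hypothetical staging site name
PROD_BINDING = "http/*:80:www.example.com"
STAGE_BINDING = "http/*:8080:staging.example.com"
TEMP_BINDING = "http/*:8081:parked.example.com"   # temporary "parking" binding

WARMUP_URLS = [
    "http://staging.example.com:8080/",
    "http://staging.example.com:8080/expensive-page",
]

def warm_up():
    # Hit a few URLs so caches and the app domain are ready before the swap.
    for url in WARMUP_URLS:
        with urllib.request.urlopen(url, timeout=60) as resp:
            print(url, resp.status)

def set_binding(site, binding):
    # appcmd replaces the site's binding list with the value given here.
    subprocess.run(
        [APPCMD, "set", "site", f"/site.name:{site}", f"/bindings:{binding}"],
        check=True,
    )

def swap():
    # Not atomic: there is a brief window while the bindings move.
    set_binding(PROD_SITE, TEMP_BINDING)    # park the old production site
    set_binding(STAGE_SITE, PROD_BINDING)   # staging now answers production traffic
    set_binding(PROD_SITE, STAGE_BINDING)   # old production becomes the new staging
    # If your build scripts rely on fixed site names, swap the names too
    # (appcmd can set site attributes; verify the syntax for your version).

if __name__ == "__main__":
    warm_up()
    swap()
```

Run the same script (or the equivalent remoting) against every server in the farm; since the swap cannot be atomic anyway, a short window where servers disagree is expected and is why both code versions must be able to run at once.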
As explained via Skype, you might like to have a look at "reverse proxy IIS". The following article actually looks very promising:
http://weblogs.asp.net/owscott/creating-a-reverse-proxy-with-url-rewrite-for-iis
This way you could set up a public-facing "frontend" website which can easily be switched between two (or more) private/protected sites, even if they reside on the same machine. Furthermore, this would also allow you to have two public-facing URLs that are simply swapped depending on your requirements and deployment.
Just an idea... I haven't tested it in this scenario, but I'm running a public reverse proxy on Apache and serving a private IIS website through a VPN as its content.
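To illustrate the pattern only (this is not the IIS URL Rewrite/ARR configuration from the article), here is a toy reverse proxy that forwards to whichever private backend is currently marked live; a release then just flips that pointer. Ports and backend URLs are invented:

```python
"""
Toy reverse-proxy sketch: the public "frontend" forwards every GET request
to whichever private backend is currently live. Backend addresses are made up.
"""
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

BACKENDS = {
    "blue": "http://127.0.0.1:8081",
    "green": "http://127.0.0.1:8082",
}
LIVE = "blue"  # flip to "green" once the new version is deployed and warmed up

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = BACKENDS[LIVE] + self.path
        with urllib.request.urlopen(target) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type",
                             upstream.headers.get("Content-Type", "text/html"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```

With IIS URL Rewrite the same idea is expressed as a rewrite rule whose target you change at release time, so the public URL never moves.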
We have a non-standard Kentico architecture which Kentico have advised is supported as long as synchronization of physical files between load balanced servers is disabled and handled manually. What is the correct way to manually synchronize web farm server files? I wondered about using a tool like DirSync but assume this would require one server to act as the primary, whereas with Kentico a new media file, for example, may be initially saved to any of the physical servers.
I'm hoping to identify a definitive solution to this issue. Thanks.
A Kentico web farm synchronizes physical files automatically by default, provided the web farm is working properly. Because each request can be served by a different server, Kentico serializes the file binary into the database, which is shared by all servers, and then re-creates the file on any server where it is missing.
I'm not aware of any situation where web farms are supported but file synchronization isn't. It's all or nothing; there is no middle ground.
Can you be more specific about why the synchronization of physical files is not working on your end? As long as all servers can see the database (which they should, otherwise the web farm is not working at all), the file synchronization will work.
PS: If your files are not synchronized, go to the Web farm -> Tasks application and check how many tasks are there. If there are no tasks (or very few which are constantly being deleted), then your web farms are working; if there are tasks older than a few minutes, then your web farms are not working at all.
I read the thread above and would recommend you take a look at this tool from BizStream: https://devnet.kentico.com/marketplace/modules/compare-for-kentico
I haven't gotten to play with it myself, but they are a top-notch shop so I can bet it's a top-notch product.
Otherwise you are going to have to go with custom sync code.
We've tried to do moves via the SQL tables and it is 'possible', but the number of interconnected relationships makes it quite unrealistic to build or support.
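If you do end up writing custom sync code, a very rough sketch of the idea is below. The UNC paths are hypothetical, and a real implementation would also need delete handling, retries and locking:

```python
"""
Rough sketch of custom media-file sync between web farm servers: copy any
file that is newer on one side to the other side, in both directions, so a
file uploaded to either server propagates. Paths are hypothetical.
"""
import shutil
from pathlib import Path

ROOTS = [
    Path(r"\\web01\c$\inetpub\wwwroot\MySite\media"),
    Path(r"\\web02\c$\inetpub\wwwroot\MySite\media"),
]

def needs_copy(src: Path, dst: Path) -> bool:
    # Copy if the destination is missing or older than the source.
    return not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime

def sync_pair(src_root: Path, dst_root: Path) -> None:
    for src in src_root.rglob("*"):
        if src.is_file():
            dst = dst_root / src.relative_to(src_root)
            if needs_copy(src, dst):
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
                print(f"copied {src} -> {dst}")

if __name__ == "__main__":
    for src_root in ROOTS:
        for dst_root in ROOTS:
            if src_root is not dst_root:
                sync_pair(src_root, dst_root)
```

Because either server can receive a new media file first, the sync has to be bidirectional, which is exactly why a one-way tool with a single "primary" server is an awkward fit here.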
We are migrating from WebSphere BPM 8.0.1.3 to 8.5.6, our plan is to move application by application rather than in a big-bang. The idea would be that when we move an application to the new server, we would create an IHS rule which redirects the related URLs to the new server. That would mean that we keep some applications running on the old server while some are already migrated to the new one.
Is this possible to achieve? Or is there an alternative to rewriting IHS rules, such as making use of the web server plug-in?
Unfortunately, I don't think that your current approach is going to work well for you. I've outlined the various options for IBM BPM upgrades here. I see several major problems with your approach, all of which come down to the fact that many of the URLs used by IBM BPM contain no details about the context for the request.
The first issue I see is that IBM uses a portal for a given user's work; that is, all their tasks across the various BPM solutions appear in the same web UI. This URL is not different across the Process Applications in the install, which means all your users get their task list by going to a URL like https://mybpmserver/portal. There is no way to tell which process app a given user is working with in this context, so you don't know whom to redirect to the new server.
The second issue is that users can work with multiple process apps, so even if the context were known in the above URL, you would run into complexities when users work in two different process apps unless both have been migrated.
The third issue is that BPM is essentially a state engine. IBM does not supply a way to "migrate" that state from an old install to a new install on a per-Process-App (PA) basis; you have to migrate all or none. Assuming "none", because it sounds like you want to follow the drain approach in my article, you then have the problem that the URLs for executing a task do not carry the PA context, so you won't know which server to direct which task to. That is, for a given PA you will have tasks on both the old server (which existed before the upgrade) and the new server (created after the upgrade), but the URLs for these tasks will look essentially the same.
There are additional issues, but the main one comes down to properly understanding how the runtime BPM engines work. Some of the above issues may be mitigated if you have a separate UI layer for presenting the tasks to the users (my company makes a portal replacement that can do this), which would let it understand the context of the tasks; but if you have that, then you can get the correct behavior in that code and not worry about WAS configuration settings.
You could use the plugin-cfg.xml merge tool on the two generated plugin-cfg.xml files. That way the WAS plug-in would always know which server has which applications.
Ok, so here's the thing.
I'm working on an existing web application (it started out as a classic ASP app, so you can imagine :P) under ASP.NET 4.0 and SQL Server 2005. We are 4 developers using local instances of SQL Server 2005 Express, sharing the source code and a Visual Studio database project.
This web app has several "universes" (that's what we call them). Every universe has its own database (currently on the same server), but they all share the same schema (tables, sprocs, etc.) and the same source/site code.
So deploying manually is really annoying, because I have to deploy the source code and then run the SQL scripts by hand on each database. I know that manual deployment can cause problems, so I'm looking for a way to automate it.
We've recently created a Visual Studio Database Project to manage the schema and generate the diff-schema scripts with different targets.
But I have no idea how to put the pieces together.
I would like to:
Have a way to make a "sync" deploy to a target server (thankfully I have full RDC access to the servers, so I can install things if required). By "sync" deploy I mean that I don't want to deploy the whole application every time; it has lots of files and I just want to deploy those that are new or changed.
Generate diff SQL update scripts for every target database and combine them into just one script. For this I would need a list of the database names somewhere.
Copy the site files and execute the generated SQL script in an easy, automated way.
I've read about MSBuild, MS WebDeploy, NAnt, etc. But I don't really know where to start and I really want to get rid of this manual deploy.
If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your suggestions.
I know this is not a very specific question but I've googled a lot about it and it seems I cannot figure out how to do it. I've never used any automation tool to deploy.
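To make it concrete, something along these lines is roughly what I'm picturing. All names and paths are made up, and it assumes the database project can already emit one diff script per target database:

```python
"""
Rough sketch of the deploy I have in mind: combine the per-database diff
scripts into one file, then copy only new/changed site files to the server.
Names and paths are hypothetical; producing the diff scripts themselves is
left to the Visual Studio database project.
"""
import shutil
from pathlib import Path

DATABASES = ["Universe1", "Universe2", "Universe3"]   # hypothetical universe DBs
DIFF_SCRIPT_DIR = Path(r"build\sql")                  # one diff script per database
COMBINED_SCRIPT = Path(r"build\deploy-all.sql")

SOURCE_SITE = Path(r"build\site")                     # build output of the web app
TARGET_SITE = Path(r"\\webserver\wwwroot\MySite")     # hypothetical target share

def combine_sql_scripts() -> None:
    # Prefix each diff script with USE [db] so one script updates every universe.
    with COMBINED_SCRIPT.open("w") as out:
        for db in DATABASES:
            out.write(f"USE [{db}];\nGO\n")
            out.write((DIFF_SCRIPT_DIR / f"{db}.sql").read_text())
            out.write("\nGO\n")

def sync_site_files() -> None:
    # Copy only files that are new or changed (the "sync" deploy).
    for src in SOURCE_SITE.rglob("*"):
        if src.is_file():
            dst = TARGET_SITE / src.relative_to(SOURCE_SITE)
            if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)

if __name__ == "__main__":
    combine_sql_scripts()
    sync_site_files()
    # The combined script would then be run against the server, e.g. with
    # sqlcmd -S <server> -i deploy-all.sql (or as a build-server step).
```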
Any help will be really appreciated. Thank you all.
Have you heard of the term multi-tenancy? It might be worth looking it up to see whether it applies to your "multiverse", especially if one universe is never accessed by another...
See:
http://en.wikipedia.org/wiki/Multitenancy
http://msdn.microsoft.com/en-us/library/aa479086.aspx
UPDATE:
If the application and database are the same for each client (or tenant), I believe there are applications that may help in providing the same code/DB as a SaaS offering, i.e. another application/configuration layer on top that can handle the deployments etc.?
I think these are called Platform as a Service (PaaS) applications:
see: http://en.wikipedia.org/wiki/Platform_as_a_service
Multi-Tenancy in your case may be possible, depending on client security requirements, with a bit of work (or a lot of work):
Option 1:
You could use one instance of the application, i.e. deploy the site once and connect to a different database for each client. You would need to differentiate each client by URL and isolate content/data by setting a connection string for each, etc. (This would reduce your site deployments to one deployment.)
Option 2:
You could use a single instance of the application and a single database. You would need to add a "TenantID" to each table and adjust all your code to accept a TenantID to ensure data security/isolation. Again, you would need to detect/differentiate the tenant based on the URL and set the TenantID for the session, to be used for every database call. (This would reduce your site and database deployments to one of each.)
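For illustration only (Python rather than ASP.NET), here is a minimal sketch of the tenant lookup both options rely on. The host names, database names and TenantIDs are invented:

```python
"""
Minimal tenant-resolution sketch: the request's host name identifies the
universe/tenant and maps either to a connection string (option 1) or to a
TenantID used in every query (option 2). All values are made up.
"""

TENANTS = {
    "universe1.example.com": {"tenant_id": 1, "db": "Universe1"},
    "universe2.example.com": {"tenant_id": 2, "db": "Universe2"},
}

def resolve_tenant(host: str) -> dict:
    try:
        return TENANTS[host.lower()]
    except KeyError:
        raise ValueError(f"Unknown tenant host: {host}")

def connection_string(host: str) -> str:
    # Option 1: one site, one database per tenant.
    tenant = resolve_tenant(host)
    return f"Server=dbserver;Database={tenant['db']};Integrated Security=true"

def scoped_query(host: str) -> tuple[str, dict]:
    # Option 2: one shared database, every query filtered by TenantID.
    tenant = resolve_tenant(host)
    return ("SELECT * FROM Orders WHERE TenantID = %(tenant_id)s",
            {"tenant_id": tenant["tenant_id"]})

if __name__ == "__main__":
    print(connection_string("universe1.example.com"))
    print(scoped_query("universe2.example.com"))
```

Either way, the deployment problem shrinks because there is only one site (and, with option 2, one database) to update.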
Our company currently runs two Windows 2003 servers (a web server and an MSSQL 8 database server). We're planning to add another couple of servers for redundancy/availability purposes in a web farm setup. Our websites are predominantly ASP.NET; we do have a few PHP sites, but these are mainly static with no DB.
Does anyone who has been through this process have any gotchas or other points I should be aware of? And would using Windows Server 2008 offer any additional advantages for this situation (so I can convince my boss to upgrade :) ?
Thanks.
If you have dynamic load balancing (i.e. my first request goes to server X, but my next request may go to server Y or Z), you will find that in-proc sessions do not work. So you will need either sticky sessions (your load balancer will ALWAYS send me, i.e. my session, to server X) or out-of-process sessions (e.g. stored in SQL Server).
Like Michael says, you'll need to take care of your session state. Ideally make it lean and store it out of process. You'll have a similar challenge with caching, depending on how you use it, and you might be interested in looking at a more robust caching technology if you currently only use ASP.NET caching.
Don't forget things like machine keys and validation in your web.config. The machineKey values need to be consistent across your servers.
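For example, a small helper (a sketch, not an official tool) to generate a machineKey element once so you can paste the same values into web.config on every server; the key lengths follow commonly used values, so check them against your framework version:

```python
"""
Generate a machineKey pair to share across all web farm servers, so view
state and forms-authentication tickets validate regardless of which node
serves the request.
"""
import secrets

def generate_machine_key() -> str:
    validation_key = secrets.token_hex(64).upper()   # 64 bytes -> 128 hex chars
    decryption_key = secrets.token_hex(32).upper()   # 32 bytes -> 64 hex chars
    return (
        f'<machineKey validationKey="{validation_key}"\n'
        f'            decryptionKey="{decryption_key}"\n'
        f'            validation="SHA1" decryption="AES" />'
    )

if __name__ == "__main__":
    # Generate once, then copy the same element into web.config on every server.
    print(generate_machine_key())
```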
Read up on IIS7 and you should be able to pick out several good examples to show off to your boss.
A web farm can give you opportunities and challenges with deployment that should not be overlooked.
I don't have specific experience with the setup above, but for general moves of this kind I would recommend a phased approach: move to Windows 2008 first, then build out the farm.
One additional thing to look at is your deployment plan. Deployment plans seem to be sadly overlooked and/or undervalued. Remember that you are deploying to multiple nodes and you want to take into account how you want to deploy and test in a logical fashion.
For example, assume you have four nodes in your farm. Do you pull two out of the cluster, update and test them, then swap out the other two and repeat? Determine whether your current deployment process fits the answer you give. Just because you have X times the number of servers does not mean that you want or need to do X times the amount of work.
Just revisiting the caching part of the conversation for a moment. You should definitely take a look at a distributed caching solution. If you are pre-caching data and using callbacks with cache removals, you can really put a pounding on the database if you are not careful. Also, a lot of the distributed caching solutions offer some level of session state management, as well. I have been very much enjoying Microsoft's Velocity project, although it is just a second CTP release and not ready for production.
In addition to what others have said, you might want to consider looking into Richard Campbell's (of .NET Rocks!) product:
http://www.strangeloopnetworks.com/
We use the ASP.NET State Server for handling our sessions. This comes free with Windows Server 2003/2008.
We then have to make sure the machine keys are the same (a setting in your web.config files).
I then manually take each site offline (using app_offline.htm, or whatever the magic file is called). Alternatively, you can use IIS and just turn the site off and the offline site 'on'.
That's about it. You could worry about distributed caching, but that's pretty hard-core stuff. You can get a lot of good mileage out of the default output caching in ASP.NET; I'd start there before you delve into the complexity (and, for some products, cost) of distributed caching.
Oh, and we're using an F5 load balancer that does NOT do sticky sessions, so we need to keep our sessions out of process, which is why we're using the ASP.NET State Server.
One other gotcha, aside from the session issues described by the other posters, is apps writing to the local file system. Scaling out to a web farm will break the apps if they assume files are on the local machine; for example, uploaded files might or might not be available depending on which server is hit. Changing the paths to point to a shared drive should fix this.
We have a situation where users are allowed to upload content, and then separately make some changes, then submit a form based on those changes.
This works fine in a single-server, non-failover environment; however, we would like some sort of solution for sharing the files between servers that supports failover.
Has anyone run into this in the past? And what kind of solutions were you able to develop? Obviously persisting to the database is one option, but we'd prefer to avoid that.
At a former job we had a cluster of web servers with an F5 load balancer in front of them. We had a very similar problem in that our applications allowed users to upload content, which might include photos and such. These were legacy applications; we did not want to edit them to use a database, and a SAN solution was too expensive for our situation.
We ended up using a file replication service on the two clustered servers. This ran as a service on both machines, using an account that had network access to paths on the opposite server. When a file was uploaded, this backend service synced the data in the file system folders, making it available to be served from either web server.
Two of the products we reviewed were ViceVersa and PeerSync. I think we ended up using PeerSync.
In our scenario, we have a separate file server that both of our front-end app servers write to; that way, either server has access to the same set of files.
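For illustration, a minimal sketch of that approach (Python for brevity; the share path is hypothetical, and the file server itself still needs its own redundancy):

```python
"""
Sketch of the "separate file server" approach: every front-end server saves
uploads to, and reads them from, the same UNC share instead of its local
disk, so any node can serve any file.
"""
from pathlib import Path
import uuid

UPLOAD_ROOT = Path(r"\\fileserver\uploads")   # shared between all web servers

def save_upload(filename: str, data: bytes) -> Path:
    # Store under a generated name so two servers can't collide on the same file.
    target = UPLOAD_ROOT / f"{uuid.uuid4()}-{filename}"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target

def read_upload(path: Path) -> bytes:
    # Any front-end server can read the file back from the share.
    return path.read_bytes()

if __name__ == "__main__":
    stored = save_upload("photo.jpg", b"...binary content...")
    print("stored at", stored)
```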
The best solution for this is usually to provide the shared area on some form of SAN, which will be accessible from all servers and has its own failover.
This also has the benefit that you don't have to provide sticky load balancing; the upload can be handled by one server and the edit by another.
A shared SAN with failover is a great solution at a great (high) cost. Are there any similar solutions with failover at a reasonable cost? Perhaps something like DRBD for Windows?
The problem with a simple shared filesystem is the lack of redundancy (what if the file server goes down?).