I have two different GCP projects, one with all my running servers and another running my Firebase app. Is there a way to merge those two?
At present there is no GCP option to merge projects; however, I have created a feature request to provide this functionality in the future. There is no ETA, but any progress on it can be followed at the above link.
As a workaround you can create disk snapshots or images of your VM instances in one project and move them incrementally to the other, as sketched below. Once you are completely sure that the migrated VMs are working properly in the destination project, you can shut down the other project, which can still be restored within 30 days in case you need to do so.
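A minimal sketch of that per-VM move with the gcloud CLI (all names here are placeholders, and the destination project needs read access to the source image):

```powershell
# Hedged sketch: copy one VM between projects via an image (all names are placeholders).
# 1. In the source project, create an image from the VM's boot disk (ideally with the VM stopped):
gcloud compute images create my-vm-image `
  --source-disk=my-vm-disk --source-disk-zone=us-central1-a --project=source-project

# 2. In the destination project, create a new VM from that image
#    (requires read access to the image in the source project):
gcloud compute instances create my-vm-copy `
  --image=my-vm-image --image-project=source-project `
  --zone=us-central1-a --project=destination-project
```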
I am not familiar with Firebase, so you would have to take into account any DNS resolution and domain issues with your application if you decide to move the app to that project instead.
The Issue
I am currently in the process of integrating a pre-rendering service for SEO optimization; however, we use an Azure App Service Plan to scale up or down when necessary.
One of the steps for setting up the proper configuration requires placing an applicationHost.xdt file in the /site/ directory, which is one level above the /site/wwwroot directory where the application itself gets deployed to.
What steps should I take in order to have the applicationHost.xdt file persist to new instances spawned by the scaling process?
Steps I have taken to solve the issue
So far I have been Googling a lot, but haven't succeeded in finding a lot of documentation on using an applicationHost.xdt file in combination with an Azure App Service Plan.
I am able to upload the file to an instance manually, however I have assumed that when we then scale up to more instances the manually uploaded file will not be present on the new instance(s).
Etcetera
We are using Prerender.io as pre-rendering service.
Should an easier-to-set-up and similarly priced service be available, we would be open to suggestions, as we are in an exploratory phase regarding pre-rendering.
I suppose this won't be a problem, because all files under an Azure App Service are shared between all your instances. You can check this in the Kudu wiki: Persisted files. In my test, all instances kept the file.
As for uploading the applicationHost.xdt, you don't have to do it manually; there is an IIS Manager site extension that lets you very easily create XDT files, and it provides some sample XDTs for you.
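If you do want to script the upload instead, a minimal sketch via the Kudu VFS API could look like this (site name, credentials and local path are placeholders):

```powershell
# Hedged sketch: push applicationHost.xdt into /site/ through the Kudu VFS API.
# Replace the site name, deployment credentials and local file path with your own.
$user = "deployment-user"
$pass = "deployment-password"
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))
Invoke-RestMethod -Method Put `
  -Uri "https://your-app.scm.azurewebsites.net/api/vfs/site/applicationHost.xdt" `
  -Headers @{ Authorization = "Basic $auth"; "If-Match" = "*" } `
  -InFile ".\applicationHost.xdt"
```

Because /site/ sits on the shared content volume, the file should then be visible to every instance the plan scales out to.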
I am in the process of trying to make the publishing process quicker and simpler for one of our customers on their Sitecore-based website. Through research I stumbled upon Merge Replication, which might solve some of our issues, but it introduces other issues.
I need your help and guidance to figure out which way is the best!
We've got a CD & CM setup: one CM server which has its own SQL instance, and two CD servers with a SQL instance each.
At the moment I have the current setup:
CM (master, web and core databases). Web is shown only internally on a secure admin URL for the site; this works like a preview site.
CD1 & CD2 are the servers for visiting users, these each have a publishing-target in Sitecore.
When we deploy a release:
1. Deploy new code for CM. Publish templates and potential content changes for Sitecore to Web. Verify and authenticate that everything is correct.
2. Take out CD1 of the Load Balancer, deploy new code for CD1, publish templates and potential changes to Sitecore, verify and authenticate, then put server back into the load balancer.
3. Repeat step 2 for CD2.
4. Deployment done
This process is working OK for us now; we are up and running at all times without downtime on the site.
We've got a few issues with the current setup:
Our search (Elasticsearch) is populated when CM publishes to Web, so at the moment Elasticsearch can potentially contain data which is not yet published to the CD servers.
When publishing, the editors could forget to publish to one of the CD servers, which would cause inconsistencies between the servers, which we would like to avoid.
Everything needs to be published multiple times for the same environment, which takes up time.
Editors do not know what a CD server is, they just want to have a “preview” and “Live” publishing target.
I've looked into the Merge Replication for Sitecore, and actually also have it working in a test environment. The advantage we want from this is that we only have two publishing targets:
Preview (CM server preview database)
Live (CM server web database, which then gets replicated out into the CD servers web databases)
The Elasticsearch instance will rely on data from CM’s web database, which is live data.
We can also have an Elasticsearch instance running on Preview.
The issue here is that now I can't deploy only to CD1 or CD2 when doing a deployment. What if I have breaking changes towards Sitecore? The site will break if I publish new breaking Sitecore items to a server which hasn't been deployed to yet.
How can I get the best of these two worlds? Any ideas?
Do you have an ES for each CD?
If you publish data to a single CD and have a shared ES you will get inconsistency either way.
Otherwise I would make changes to the publish dialog so that only an admin/developer can see the CD servers individually.
Example of normal user:
Preview
Live
Example of admin user:
Preview
Live
CD1
CD2
I am working on an ASP.NET MVC4 project where the same project needs to be deployed to many clients on a daily basis; each client has its own domain/subdomain, a separate app pool and a separate database (MS SQL).
Doing each deployment manually can take at least 1-2 hours if everything goes well. Is there any way I can do this in an automated fashion?
Moreover, we also need to update all of the apps when a new version is released, maybe one by one or all of them at the same time. Doing this manually could take weeks, and once we have more clients it will not be possible to do these updates manually.
The update involves suspending the app for some time, taking a full backup of files and the db, updating the application code/files in the app folder, upgrading the db with a script, then starting the app and running a diagnosis script to check whether the update was successful. If not, we need to check what went wrong.
How can we automate these updates? Any ideas on how to approach this issue would be great.
As a developer for BuildMaster, I can say that this scenario, known as the "Core Version" pattern, is a common one. If you're OK with a paid solution, you can set up deployment plans within the tool that do exactly what you described.
As a more concrete example, we experience this exact situation in a slightly different way. BuildMaster has a set of 60+ extensions that rely on a specific SDK version. In our recent 4.0 release, we had to re-deploy every extension because of breaking API changes within the SDK. This is essentially equivalent to having a bunch of customers and deploying to them all at once. We have set up our deployment plans such that any time we create a new release of the SDK application, we have the option to set a variable that says to build every extension that relies on the SDK:
In BuildMaster, the idea is to promote a build (i.e. an immutable object that travels through various environments like Dev, Test, Staging, Prod) to its final environment (where it becomes the deployed build for the release). In your case, this would be pushing your MVC application to its final environment, and that would then trigger the deployments of all dependent applications (i.e. your customers' instances of your application). For our SDK, the plan looks like this:
For your scenario, you would only need the single action, "Promote Build". As I mentioned before, any dependents would then be promoted to their final environments, so all your customer deployments would kick off once that action is run during deployment. As an example, our Azure extension's deployment plan for its final environment looks like this (internal URLs redacted):
You may have noticed that these plans are marked "Shared", which means every extension we have has the exact same deployment plan, but utilizes different variables to handle the minor differences like names, paths, etc.
Since this is such an enormous topic I could go on for ages, but I think that should be sufficient for your use-case if you wanted to try it out.
There are others, but you could set up Team Foundation Server to deploy automated builds.
http://msdn.microsoft.com/en-us/library/ff650529.aspx
I find the easiest way to do this from an MVC project is to create a publish profile.
This is done by right-clicking your project selecting publish and then configuring it to your needs.
Then from TFS you create a new build definition, this kicks of a wizard which takes you through it.
There are quite a few options which would be too long to go into for every scenario.
The main change I usually find the most important is to set an MSBuild Argument to deploy with the publish profile.
This can be found at Process > Advanced > MSBuild Arguments.
Once this is configured correctly it's a simple case of right-clicking and queue new build to build and deploy.
You will need a different publish profile/build configuration per deployment environment.
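For reference, the MSBuild arguments usually look something like this ("Production" is a hypothetical profile name); the same /p: switches can be pasted into Process > Advanced > MSBuild Arguments:

```powershell
# Hedged sketch: build and deploy using a publish profile (project and profile names are hypothetical).
msbuild .\MyApp.csproj /p:Configuration=Release /p:DeployOnBuild=true /p:PublishProfile=Production
```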
For backups I use a powershell script which can be called manually or from TFS.
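As a rough idea of what such a file-backup step can look like (paths are hypothetical; Compress-Archive requires PowerShell 5 or later):

```powershell
# Hedged sketch: zip the current site folder with a timestamp before deploying new code.
$stamp  = Get-Date -Format "yyyyMMdd-HHmmss"
$source = "D:\Sites\MyApp"
$target = "D:\Backups\MyApp-$stamp.zip"
Compress-Archive -Path "$source\*" -DestinationPath $target
```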
You also have a drop folder in TFS which keeps a backup of x many releases.
The databases are automatically configured via SQL Server to back up; TBH I didn't set that up, it was a DB admin who is also involved with releases.
From a dev testing side I use jMeter (http://jmeter.apache.org/) to run some automated scripts that check that users can login and view certain screens, just to confirm nothing major has gone wrong. However there is usually a testing team to run more detailed tests, again not setup by me.
All of the above will probably take you some time to set up, but in the long run it will literally save you weeks of time over a year.
A free alternative to TFS is http://www.cruisecontrolnet.org/; I have used this in the past too and it is pretty good.
You can automate your .Net deployments with Beanstalk, which will give you a way to trigger deployments with a single click, watch progress, manage permissions and see history of deployments. Check out this guide on the topic:
http://guides.beanstalkapp.com/deployments/deploy-dotnet.html
I hope you will find it useful.
P.S. - I work at Beanstalk.
I installed WordPress using EC2. I created a load balancer by creating an image (AMI) and then adding both Wordpress1 and Wordpress2 to the load balancer, but I'm still getting a database error and have to restart the instances. If I'd like to put 4 instances behind the load balancer, are the steps the same? I ask because I saw a "Number of Instances" option when I launched an AMI; the default value is 1, and I'm not sure whether I should enter 3 or 4 to create multiple instances in one click.
Also, if I update the Wordpress1 instance, will the updates show when the domain loads the Wordpress2 instance?
If you want to launch multiple instances and a database etc., you should consider using AWS CloudFormation. CloudFormation is just a big JSON document that contains the configuration of your environment, including the servers, autoscaling, access, registration with the load balancer, etc.
See http://aws.amazon.com/en/cloudformation/ for more details.
There is already an example template for WordPress, including a database and autoscaling groups (example wordpress template).
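Launching such a template with the AWS CLI might look roughly like this (the template URL and parameter values are placeholders; check the template you actually use for its required parameters):

```powershell
# Hedged sketch: create a stack from a WordPress sample template (URL and parameters are placeholders).
aws cloudformation create-stack `
  --stack-name wordpress-blog `
  --template-url "https://example.com/WordPress_Multi_AZ.template" `
  --parameters ParameterKey=KeyName,ParameterValue=my-keypair ParameterKey=DBPassword,ParameterValue=change-me
```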
However, as datasage mentioned, you will need to make adjustments to WordPress to make it work in a multiserver environment.
The "problem" with multiserver environments is that if you upload a file or, in your case, upgrade WordPress, it will only happen on one server, which could be terminated at any point. Furthermore, the upgrade could contain changes to the database structure, and then it gets complicated.
If you are building something in the cloud, you should always keep in mind that every service you build, in your case the frontend webservers and the database, should be allowed to fail without interrupting your service.
Another point is that you should avoid doing things by hand; automation is the key.
An environment where you need to link your server to a load balancer by hand is not very useful in the cloud, where servers are continuously terminated, rebooted and exchanged.
For your webservers you can use "autoscaling groups" to get this behavior.
If you are using autoscaling groups and a server is terminated or considered unhealthy, a new one will be started automatically and registered with the loadbalancer as soon as it is considered as healthy.
For your database, Amazon offers RDS multi-AZ environments, which provide automatic failover.
Applying upgrades in the cloud can be tricky and there are different ways to do this: for example, using a shared NFS mount with the code base, git deployments, or the way you already started, creating a new AMI for every upgrade and then replacing the servers. There are a lot of options and they all have their benefits and drawbacks.
As far as I understand your use case, the cloud may not be the right choice at the moment.
Normally, hosting a small business in the cloud is much more expensive than using a single server. You will only save money if you need, say, 20 servers in the evening and only 2 or 3 for the rest of the day. Of course there are a lot more points to consider, but that would be too much here.
Autoscaling in EC2 is horizontal scaling, which means that instances are added as your infrastructure scales up. This is in contrast to vertical scaling, where a single instance is given more resources.
In order to use this effectively, each instance cannot store data that may be needed by other instances. The most common requirement is the database which will need to exist on its own instance outside of the autoscaled instances. You could use RDS for this.
Wordpress also stores file uploads, plugins and themes within the wp-content folder within the wordpress install. By default, if you upload a file, it will be stored on one instance but not any of the others. You could store everything on an NFS volume shared by one of the instances, or you could try a plugin like this: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/
To deploy a new version of our website we do the following:
Zip up the new code, and upload it to the server.
On the live server, delete all the live code from the IIS website directory.
Extract the new code zipfile into the now empty IIS directory
This process is all scripted, and happens quite quickly, but there can still be a 10-20 second downtime when the old files are being deleted, and the new files being deployed.
Any suggestions on a 0 second downtime method?
You need 2 servers and a load balancer. Here it is in steps:
Direct all traffic to Server 2
Deploy on Server 1
Test Server 1
Direct all traffic to Server 1
Deploy on Server 2
Test Server 2
Direct traffic to both servers
The thing is, even in this case you will still have application restarts and loss of sessions if you are using "sticky sessions". If you have database sessions or a state server, then everything should be fine.
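How you "direct traffic" to one server depends on your load balancer; one generic approach, assuming the balancer probes a health-check file (an assumption here, not something from the answer above), is to fail that probe while you deploy:

```powershell
# Hedged sketch: drain Server 1 by failing its health-check probe.
# The UNC path and file name are hypothetical; your load balancer may use a different check.
$site = "\\server1\c$\inetpub\wwwroot"
Rename-Item "$site\healthcheck.html" "healthcheck.html.off"   # Server 1 drops out of rotation
# ... deploy and test Server 1 here ...
Rename-Item "$site\healthcheck.html.off" "healthcheck.html"   # Server 1 rejoins rotation
```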
The Microsoft Web Deployment Tool supports this to some degree:
Enables Windows Transactional File System (TxF) support. When TxF support is enabled, file operations are atomic; that is, they either succeed or fail completely. This ensures data integrity and prevents data or files from existing in a "half-way" or corrupted state. In MS Deploy, TxF is disabled by default.
It seems the transaction is for the entire sync. Also, TxF is a feature of Windows Server 2008, so this transaction feature will not work with earlier versions.
I believe it's possible to modify your script for 0-downtime using folders as versions and the IIS metabase:
for an existing path/url:
path: \web\app\v2.0\
url: http://app
Copy new (or modified) website to server under
\web\app\v2.1\
Modify IIS metabase to change the website path
from \web\app\v2.0\
to \web\app\v2.1\
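On IIS 7 and later, that metabase change can be scripted with the WebAdministration module, roughly like this (drive letter and site name are assumptions):

```powershell
# Hedged sketch: repoint the existing site at the new version folder.
Import-Module WebAdministration
Set-ItemProperty "IIS:\Sites\app" -Name physicalPath -Value "C:\web\app\v2.1"
```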
This method offers the following benefits:
In the event new version has a problem, you can easily rollback to v2.0
To deploy to multiple physical or virtual servers, you could use your script for file deployment. Once all servers have the new version, you can simultaneously change all servers' metabases using the Microsoft Web Deployment Tool.
You can achieve zero-downtime deployment on a single server by utilizing Application Request Routing in IIS as a software load balancer between two local IIS sites on different ports. This is known as a blue-green deployment strategy, where only one of the two sites is available in the load balancer at any given time. Deploy to the site that is "down", warm it up, and bring it into the load balancer (usually by passing an Application Request Routing health check), then take the original site that was up out of the "pool" (again by making its health check fail).
A full tutorial can be found here.
I went through this recently and the solution I came up with was to have two sites set up in IIS and to switch between them.
For my configuration, I had a web directory for each A and B site like this:
c:\Intranet\Live A\Interface
c:\Intranet\Live B\Interface
In IIS, I have two identical sites (same ports, authentication, etc.), each with its own application pool. One of the sites is running (A) and the other is stopped (B). The live one also has the live host header.
When it comes to deploying to live, I simply publish to the stopped site's location. Because I can access the B site using its port, I can pre-warm the site so the first user doesn't cause an application start. Then, using a batch file, I copy the live host header to B, stop A and start B.
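The batch-file step could equally be a small PowerShell script along these lines (site names and host header are hypothetical):

```powershell
# Hedged sketch: move the live host header from site A to site B and swap which one runs.
Import-Module WebAdministration
Stop-Website "Live A"   # stop A first so the live binding can move without clashing
New-WebBinding -Name "Live B" -Protocol http -Port 80 -HostHeader "www.example.com"
Start-Website "Live B"
# Optionally remove the live binding from "Live A" afterwards:
Remove-WebBinding -Name "Live A" -Protocol http -Port 80 -HostHeader "www.example.com"
```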
OK so since everyone is downvoting the answer I wrote way back in 2008*...
I will tell you how we do it now in 2014. We no longer use Web Sites because we are using ASP.NET MVC now.
We certainly do not need a load balancer and two servers to do it; that's fine if you have 3 servers for every website you maintain, but it's total overkill for most websites.
Also, we don't rely on the latest wizard from Microsoft - too slow, and too much hidden magic, and too prone to changing its name.
Here's how we do it:
We have a post build step that copies generated DLLs into a 'bin-pub' folder.
We use Beyond Compare (which is excellent**) to verify and sync changed files (over FTP because that is widely supported) up to the production server
We have a secure URL on the website containing a button which copies everything in 'bin-pub' to 'bin' (taking a backup first to enable quick rollback). At this point the app restarts itself. Then our ORM checks if there are any tables or columns that need to be added and creates them.
That is only milliseconds of downtime. The app restart can take a second or two, but during the restart requests are buffered, so there is effectively zero downtime.
The whole deployment process takes anywhere from 5 seconds to 30 minutes, depending on how many files are changed and how many changes there are to review.
This way you do not have to copy an entire website to a different directory but just the bin folder. You also have complete control over the process and know exactly what is changing.
**We always do a quick eyeball of the changes we are deploying - as a last-minute double check, so we know what to test and, if anything breaks, we're ready. We use Beyond Compare because it lets you easily diff files over FTP. I would never do this without BC; you have no idea what you are overwriting.
*Scroll to the bottom to see it :( BTW I would no longer recommend Web Sites because they are slower to build and can crash badly with half compiled temp files. We used them in the past because they allowed more agile file-by-file deployment. Very quick to fix a minor issue and you can see exactly what you are deploying (if using Beyond Compare of course - otherwise forget it).
Using Microsoft.Web.Administration's ServerManager class you can develop your own deployment agent.
The trick is to change the PhysicalPath of the VirtualDirectory, which results in an online atomic switch between old and new web apps.
Be aware that this can result in old and new AppDomains executing in parallel!
The problem is how to synchronize changes to databases etc.
By polling for the existence of AppDomains with old or new PhysicalPaths it is possible to detect when the old AppDomain(s) have terminated, and if the new AppDomain(s) have started up.
To force an AppDomain to start you must make an HTTP request (IIS 7.5 supports Autostart feature)
Now you need a way to block requests for the new AppDomain.
I use a named mutex - which is created and owned by the deployment agent, waited on by the Application_Start of the new web app, and then released by the deployment agent once the database updates have been made.
(I use a marker file in the web app to enable the mutex wait behaviour)
Once the new web app is running I delete the marker file.
The only zero downtime methods I can think of involve hosting on at least 2 servers.
I would refine George's answer a bit, as follows, for a single server:
Use a Web Deployment Project to pre-compile the site into a single DLL
Zip up the new site, and upload it to the server
Unzip it to a new folder located in a folder with the right permissions for the site, so the unzipped files inherit the permissions correctly (perhaps e:\web, with subfolders v20090901, v20090916, etc)
Use IIS Manager to change the site's home directory to the new folder
Keep the old folder around for a while, so you can fallback to it in the event of problems
Step 4 will cause the IIS worker process to recycle.
This is only zero downtime if you're not using InProc sessions; use SQL mode instead if you can (even better, avoid session state entirely).
Of course, it's a little more involved when there are multiple servers and/or database changes....
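A rough PowerShell sketch of steps 3 and 4 above, under the same assumptions (paths and site name are hypothetical):

```powershell
# Hedged sketch: unzip the new build into a versioned folder and repoint the site at it.
Import-Module WebAdministration
$version = "v" + (Get-Date -Format "yyyyMMdd")
Expand-Archive -Path "C:\drops\site.zip" -DestinationPath "E:\web\$version"   # inherits E:\web permissions
Set-ItemProperty "IIS:\Sites\MySite" -Name physicalPath -Value "E:\web\$version"
# The previous version folder is left in place for rollback.
```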
To expand on sklivvz's answer, which relied on having some kind of load balancer (or just a standby copy on the same server)
Direct all traffic to Site/Server 2
Optionally wait a bit, to ensure that as few users as possible have pending workflows on the deployed version
Deploy to Site/Server 1 and warm it up as much as possible
Execute database migrations transactionally (strive to make this possible)
Immediately direct all traffic to Site/Server 1
Deploy to Site/Server 2
Direct traffic to both sites/servers
It is possible to introduce a bit of smoke testing, by creating a database snapshot/copy, but that's not always feasible.
If possible and needed, use "routing differences", such as different tenant URLs (customerX.myapp.net) or different users, to deploy to an unknowing group of guinea pigs first. If nothing fails, release to everyone.
Since database migrations are involved, rolling back to a previous version is often impossible.
There are ways to make applications play nicer in these scenarios, such as using event queues and playback mechanisms, but since we're talking about deploying changes to something that is in use, there's really no foolproof way.
This is how I do it:
Absolute minimum system requirements:
1 server with
1 load balancer/reverse proxy (e.g. nginx) running on port 80
2 ASP.NET-Core/mono reverse-proxy/fastcgi chroot-jails or docker-containers listening on 2 different TCP ports
(or even just two reverse-proxy applications on 2 different TCP ports without any sandbox)
Workflow:
start transaction myupdate
try
Web-Service: Tell all applications on all web-servers to go into primary read-only mode
Application switch to primary read-only mode, and responds
Web sockets begin notifying all clients
Wait for all applications to respond
wait (custom short interval)
Web-Service: Tell all applications on all web-servers to go into secondary read-only mode
Application switch to secondary read-only mode (data-entry fuse)
Updatedb - secondary read-only mode (switches database to read-only)
Web-Service: Create backup of database
Web-Service: Restore backup to new database
Web-Service: Update new database with new schema
Deploy new application to apt-repository
(for windows, you will have to write your own custom deployment web-service)
ssh into every machine in array_of_new_webapps
run apt-get update
then either
apt-get dist-upgrade
OR
apt-get install <packagename>
OR
apt-get install --only-upgrade <packagename>
depending on what you need
-- This deploys the new application to all new chroots (or servers/VMs)
Test: Test new application under test.domain.xxx
-- everything that fails should throw an exception here
commit myupdate;
Web-Service: Tell all applications to send web-socket request to reload the pages to all clients at time x (+/- random number)
#client: notify of reload and that this causes loss of unsaved data, with option to abort
# time x: Switch load balancer from array_of_old_webapps to array_of_new_webapps
Decommission/Recycle array_of_old_webapps, etc.
catch
rollback myupdate
switch to read-write mode
Web-Service: Tell all applications to send web-socket request to unblock read-only mode
end try
A workaround with no downtime that I regularly use is:
Rename running .NET core application dll to filename.dll.backup
Upload the new .dll (web application is available and serving the requests while file is being uploaded)
Once the upload is complete, recycle the application pool. This either requires RDP access to the server or a function to recycle the application pool in your hosting control panel.
IIS overlaps the app pool when recycling, so there usually isn't any downtime during a recycle. Requests still come in without ever knowing the app pool has been recycled, and they are served seamlessly with no downtime.
I am still searching for an even better method than this! :)
IIS/Windows
After trying every possible solution we use this very simple technique:
IIS application points to a folder /app that is a symlink (!) to /app_green
We deploy the app to /app_blue
We change the symlink to point to /app_blue (the app keeps working)
We recycle the application pool
Zero downtime, but the app does choke for 3-5 seconds (JIT compilation and other initialization tasks)
Someone called it a "poor man's blue-green deployment" without a load balancer.
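On Windows, that swap can be scripted roughly as follows (paths and app pool name are hypothetical; creating symlinks requires elevation):

```powershell
# Hedged sketch: repoint the /app symlink at the newly deployed folder, then recycle the pool.
cmd /c rmdir "C:\sites\app"                         # removes only the symlink, not its target
New-Item -ItemType SymbolicLink -Path "C:\sites\app" -Target "C:\sites\app_blue" | Out-Null
Restart-WebAppPool "MyAppPool"
```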
Nginx/linux
On nginx/linux we use "proper" blue-green deployment:
nginx reverse proxy points to localhost:3000
we deploy to localhost:3001
warmup the localhost:3001
switch the reverse proxy
shut down localhost:3000
(or use docker)
Both the Windows and Linux solutions can easily be automated with PowerShell/bash scripts and invoked via GitHub Actions or a similar CI/CD engine.
I would suggest keeping the old files there and simply overwriting them. That way the downtime is limited to single-file overwrite times and there is only ever one file missing at a time.
Not sure this helps in a "web application" though (I think you are saying that's what you're using), which is why we always use "web sites". Also, with "web sites", deploying doesn't restart your site and drop all the user sessions.