I'm new to AWS CI/CD. We currently have one WordPress website running on two AWS EC2 instances: the live site runs on the live EC2 instance, and the staging site runs on the development EC2 instance. I've put part of my code on GitHub, leaving out files such as plugins. The GitHub repository has two branches, development and master. I want to create a pipeline so that when I push code to the development branch, the staging site's code is updated automatically, and when I merge development into master, the live site's code is updated.
These instances were not created through AWS Elastic Beanstalk, so can I set up the AWS pipeline on the existing EC2 instances? And will that overwrite files not tracked by Git? I don't want those plugin files overwritten when I set up the pipeline.
If this is all possible, how should I set it up? Can anyone give me a brief outline?
I want to create one pipeline
Sadly you can't do this with a single pipeline. You need one CodePipeline (CP) per branch, so you need two CPs: one for the master branch and a second for the development branch.
These instances were not created through AWS Elastic Beanstalk, so can I set up the AWS pipeline on the existing EC2 instances?
Yes. If you run your application on Elastic Beanstalk, use CP's Elastic Beanstalk (EB) deploy action provider; with two EB environments, each CP deploys to its respective environment (one for master, a second for dev). If the instances are plain EC2 instances not managed by EB, the equivalent is the CodeDeploy deploy action, with the CodeDeploy agent installed on each instance.
And will that overwrite the other files not tracked by Git? I don't want those plugin files overwritten when I set up the pipeline.
To be clear: during a deployment, everything in the application folder on EB (/var/app/current) is deleted and replaced with the new version of your application, so files that exist only on the instance (such as plugins uploaded outside Git) will be lost.
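If you instead deploy straight to the existing EC2 instances with CodeDeploy, the shape is similar: the revision's files replace what was deployed before, but collisions with pre-existing files can be controlled. A minimal appspec.yml sketch (the web root and hook script here are assumptions, not taken from the question):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html   # assumed WordPress web root
hooks:
  AfterInstall:
    - location: scripts/fix-permissions.sh   # hypothetical helper script
      timeout: 60
      runas: root
```

Creating the deployment with --file-exists-behavior RETAIN tells CodeDeploy to leave files that already exist on the instance (for example, plugins uploaded through the dashboard) in place instead of failing or overwriting them.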
I have a dockerized ASP.NET web API that I am running on AWS. I am planning to use RDS for the database, and I need to run migrations but am unsure how to go about this. My Docker container only contains the .NET runtime, so I can't just SSH into one of the machines and run the migrations there. The RDS instance is set to only accept traffic from within the VPC, so I can't run them from my machine either. What would be the best way to run EF Core migrations against RDS?
I was thinking of setting up a temporary EC2 instance, installing the .NET SDK, EF Core, and the source code, then running the migrations and tearing it down. But I don't know whether this is a good idea, or whether there is a better way.
A temporary EC2 instance for performing this sort of thing is fine, and a common practice.
I would suggest, as an alternative, building an AWS CodeBuild job to perform the migration task. However, you might find a temporary EC2 instance useful for other things, like connecting to the database to run ad hoc queries.
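As a sketch of the CodeBuild route (the project would need to be attached to the same VPC so it can reach RDS; the project path and the CONNECTION_STRING environment variable are assumptions):

```yaml
version: 0.2
phases:
  install:
    commands:
      # Install the EF Core CLI tool on the build container
      - dotnet tool install --global dotnet-ef
  build:
    commands:
      - export PATH="$PATH:/root/.dotnet/tools"
      # Apply pending migrations directly against the RDS endpoint
      - dotnet ef database update --project src/MyApi --connection "$CONNECTION_STRING"
```

The connection string would be supplied to the build via an environment variable or Secrets Manager rather than checked into the repository.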
What is the best process for dotnet deployment?
Starting from code check-in, through Jenkins, to the AWS server.
FYI: we have multiple AWS servers which sync via DFS, so currently we deploy to only one server and DFS syncs it to the others.
A few questions:
Should we recycle the app pool after every deployment? Is it necessary?
What about packages? Should we check them in with the code or restore them on the build server?
What about T4 templates? Currently we check in the auto-generated code as well, because we can't regenerate T4 templates without installing Visual Studio on the build server.
In a few months we will be using webpack as well.
This deployment regenerates 10,000 existing pages which have output caching enabled. These 10,000 pages are also behind AWS CloudFront, and the deployment and the pages share the same app pool. What happens to the output cache after deployment? Should we have a separate app pool, and why?
FYI: this deployment is used mainly by internal staff, so it doesn't get much traffic.
We have a website hosted with AWS Elastic Beanstalk and use the eb deploy command to upload changes. The issue I am having is that deploying new changes seems to overwrite any files that were uploaded through the WordPress dashboard. I tried adding wp-content/uploads to my .ebignore, but then all images on the website were broken. Is there a way to avoid overwriting this folder at all?
You should never upload files to a server running on Elastic Beanstalk. Those files will be lost at some point, whether during a deployment, a scale-in event, or something else. The only method of making changes to your EB server should be the eb deploy command. In addition, that method of storing files will not work at all once you scale your EB environment up to multiple servers.
You should use the AWS S3 service for image storage. There are several WordPress plugins that facilitate storing images on S3.
My ASP.NET site runs as a farm of Windows EC2 web servers. Due to a recent traffic surge, I switched to Spot instances to control costs. Spot instances are created from an AMI when the hourly rate is below a set price. The web servers do not store any data, so creating and terminating them on the fly is not an issue. So far the website has been running fine.
The problem is deploying updates. The application is updated most days.
Before the switch to a Spot fleet, updates were deployed as follows: (1) a CI server would build and deploy the site to a staging server; (2) I would do a staggered deployment to the web farm using a simple xcopy of files to mapped drives.
After switching to Spot instances, the process is: (1) no change; (2) deploy the update to one of the Spot instances; (3) create a new AMI from that deployment; (4) request a new Spot fleet using the new AMI; (5) terminate the old Spot fleet. (The AMI used for a Spot request cannot be changed.)
Is there a way to simplify this process by enabling the nodes to either self-configure or use a shared drive (as Microsoft Azure does)? The site runs the Umbraco CMS, which supports multiple instances sharing the same physical location, but I ran into security errors trying to run a .NET application from a network share.
Bonus question: how can I automatically add new Spot instances to the load balancer? Presumably, a script that fetches the latest version of the application could also add the instance to the load balancer when it finishes.
I have a somewhat similar setup (except I don't use Spot instances, and my machines run Linux); here is the general idea:
The CI server creates latest.package.zip and uploads it to a designated S3 bucket.
The CI server sequentially triggers an update script on the current live instances, which downloads the latest package from S3, installs it, and restarts the service.
New instances are launched in an Auto Scaling group attached to the load balancer, with an IAM role that allows access to the S3 bucket and a user data script that triggers the update script on first boot.
This should all be doable with Windows Spot instances, I think. Note that instances launched into an Auto Scaling group that is attached to a load balancer are registered with it automatically, which covers your bonus question.
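On Windows, the user data piece of that setup could be a fragment along these lines (the bucket name, key, and paths are assumptions; the instance profile must grant s3:GetObject on the bucket):

```powershell
<powershell>
# Pull the latest build from S3 (AWS Tools for PowerShell are preinstalled on Windows AMIs)
Read-S3Object -BucketName my-deploy-bucket -Key latest.package.zip -File C:\deploy\latest.package.zip

# Unpack over the IIS web root and recycle the application pool
Expand-Archive -Path C:\deploy\latest.package.zip -DestinationPath C:\inetpub\wwwroot -Force
Import-Module WebAdministration
Restart-WebAppPool -Name "DefaultAppPool"
</powershell>
```

Because this runs on first boot, freshly launched Spot instances come up with the current build without baking a new AMI.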
We are a small WordPress-focused web development company planning to migrate to OpenShift by Red Hat.
My goal is to have the production environments (apps) in the cloud, with most development done on local laptops using OpenShift Origin; changes are then deployed as staging apps to a private OpenShift installation and, once approved, deployed back to the cloud, replacing the original app. It would be a bonus if the whole team could edit the dev version of the app (in the cloud) simultaneously.
The problem I've noticed is that web development often requires many small edits when tweaking CSS and the like, and a commit to OpenShift takes more than 10 seconds.
Hot deploy (https://www.openshift.com/kb/kb-e1057-how-can-i-deploy-my-application-without-having-to-restart-it) speeds up the process a bit, but not enough.
Another option is to SCP/SFTP into the local OpenShift installation and edit files directly, bypassing git and the build process. That leaves git out of sync, but it can be fixed (http://druss.pp.ua/2013/11/synchronize-openshift-application-after-update/).
However, the process isn't as smooth as I hoped it would be. Any ideas for improvement?
I prefer to keep my plugins and themes in git. That allows me to run a copy of my WordPress site locally for development, then add my changes to git, do a git push, and have the production site updated. This would require only a minimal change to the files you have now if you used the OpenShift quickstart. I can provide details if needed.
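For reference, a .gitignore along these lines supports that workflow (this assumes a standard WordPress layout; the exact core files to exclude depend on how the quickstart lays things out):

```
# WordPress core is provided by the platform, so don't track it
wp-admin/
wp-includes/
wp-*.php

# Media uploads live on the server (or in object storage), not in git
wp-content/uploads/

# Everything else under wp-content (themes, plugins) stays tracked
```

Only themes and plugins then travel through git push, while uploads and core stay out of the repository.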