How to git pull using AWS CodeDeploy

Suppose we already have:
Existing infrastructure with a few instances behind a load balancer
An existing GitHub account, with the application already deployed on the instances
How can I achieve the following using AWS CodeDeploy?
There have been multiple commits since the last pull to the production servers. How can we run a git pull across multiple production instances using CodeDeploy, i.e.:
Pull an instance out of the load balancer
git pull
Restart/reload the server
Add the instance back to the load balancer
Kindly suggest.
Thanks in advance.

CodeDeploy gives you the option of deploying an application directly from GitHub.
If you need to build your code before deploying, or you are not willing to introduce an appspec file into the GitHub repo, you can create a separate deployable bundle and put all the commands to pull/build, attach/detach from the load balancer, etc. in the hook scripts.
In case you are using AWS Elastic Load Balancing, we have some sample scripts you can borrow from.
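As an illustration (not the official sample scripts), here is a minimal appspec.yml sketch showing where such hooks would live; the script names and install path are hypothetical:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp                  # hypothetical install path
hooks:
  BeforeInstall:
    - location: scripts/deregister_from_elb.sh   # hypothetical: pull the instance out of the load balancer
      timeout: 300
  AfterInstall:
    - location: scripts/restart_server.sh        # hypothetical: restart/reload the application server
      timeout: 300
  ApplicationStart:
    - location: scripts/register_with_elb.sh     # hypothetical: add the instance back to the load balancer
      timeout: 300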
I hope this helps you get set up.
Thanks,
Amartya Datta Gupta

Related

How to configure Kubernetes to update from a git repository?

I just installed the bitnami/wordpress image using Helm. Is it possible to sync it with git, so that when I change some files in the git repository the Kubernetes pods are updated?
I mean updating the WordPress source code, because I'm modifying plugins in the wp-content/plugins dir.
You can use ArgoCD or Flux to automate this type of GitOps workflow. Check their documentation; they are pretty powerful and popular tools for GitOps on Kubernetes.
A possible solution is to use git-sync in a sidecar container. It will periodically pull files down from a repository and copy them to a volume.
Here is a sample manifest which uses git-sync to update the content hosted on a simple nginx web server:
https://github.com/nigelpoulton/ps-vols-and-pods/blob/master/Multi-container-Pods/sidecar.yml
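For flavor, a hedged sketch of such a sidecar (the image tag, repository URL, and paths are assumptions, using git-sync v4 style flags):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-git-sync
spec:
  volumes:
    - name: content                                      # shared between the web server and the sidecar
      emptyDir: {}
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: content
          mountPath: /usr/share/nginx/html               # nginx serves the synced checkout (git-sync places it under a symlinked subdirectory)
    - name: git-sync                                     # sidecar: periodically pulls the repo into the shared volume
      image: registry.k8s.io/git-sync/git-sync:v4.2.1    # assumed image/tag
      args:
        - --repo=https://github.com/your-org/your-site.git   # hypothetical repository
        - --root=/tmp/git
        - --period=30s
      volumeMounts:
        - name: content
          mountPath: /tmp/git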
One way I managed it (although possibly a rookie way) was through GitHub Actions.
Here's an example of mine
And here are the official docs from Docker on configuring it with GitHub Actions
You basically want to tell GitHub Actions to rebuild and push your image, and then tell your cluster to refresh, like so:
If you're using kubectl to manage your cluster check if your version supports kubectl rollout restart. You can use it to force any deployment to restart and smoothly recreate your pods (it also re-pulls the supporting image).
e.g.: kubectl rollout restart deployment/my_deployment
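A hedged sketch of such a workflow (the image name is hypothetical, and it assumes the runner is already logged in to your registry and has kubectl configured for your cluster):

name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the image               # assumes prior registry login
        run: |
          docker build -t myregistry/myapp:latest .  # hypothetical image name
          docker push myregistry/myapp:latest
      - name: Restart the deployment so pods re-pull the image
        run: kubectl rollout restart deployment/my_deployment   # assumes kubectl is configured on the runner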

AWS CodeDeploy - Deploy a different web.config

Can someone suggest a way to deploy a different web.config to different EC2 instances within the same deployment group?
Scenario: we have a few entries in the config that differ between instances, so we need some way to update them based on the instance.
Create a script to make the necessary changes to your web.config, then use the hooks section of your appspec file to run the script before install during your deployment, as sketched below. https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
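A minimal sketch of that wiring (the script name is hypothetical; the script would rewrite web.config based on instance metadata):

hooks:
  BeforeInstall:
    - location: scripts\update-web-config.ps1   # hypothetical script that prepares the per-instance web.config values
      timeout: 120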
I actually took the approach of storing my web.config files for each environment in an S3 bucket. As part of the CodeDeploy deployment group process, it would download the config file from the S3 bucket in the AfterInstall hook. This way you can build the application once and push the same application files to each environment. It also separates the configuration of the application from the actual code, so the development team doesn't need to know things like connection string values, etc.
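A hedged sketch of that hook (the bucket name, environment path, and script name are assumptions):

# Hypothetical AfterInstall hook in appspec.yml; fetch-web-config.ps1 would run something like:
#   aws s3 cp s3://my-config-bucket/staging/web.config C:\inetpub\myapp\web.config
hooks:
  AfterInstall:
    - location: scripts\fetch-web-config.ps1
      timeout: 120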

Deploying 2 apps from the same git using CodeDeploy

We have an app with web and worker nodes - the code for both is in the same git repository but gets deployed to different autoscaling groups. The problem is that there is only one appspec file, yet the deployment scripts (AfterInstall, ApplicationStart, etc.) for the web and worker nodes are different. How would I go about setting up CodeDeploy to deploy both apps and execute different deployment scripts?
(Right now we have an appspec file that just invokes chef recipes that execute different actions based on the role of the node)
I know this question is very old, but I had the same question/issue recently and found an easy way to make it work.
I added two appspec files to the same git repository: appspec-staging.yml and appspec-storybook.yml.
I also added two buildspec files, buildspec-staging.yml and buildspec-storybook.yml (AWS CodeBuild lets you specify which buildspec file to use).
The idea is that after the build is done, we copy and rename the specific appspec-xx.yml file to the final appspec.yml, so in the CodeDeploy stage we have a proper appspec.yml file to deploy. The command below is for a Linux environment.
post_build:
  commands:
    - mv appspec-staging.yml appspec.yml
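In full context, a hedged sketch of buildspec-staging.yml (the build command is a placeholder); the key parts are the rename and including appspec.yml in the artifacts:

version: 0.2
phases:
  build:
    commands:
      - npm run build                          # hypothetical build step
  post_build:
    commands:
      - mv appspec-staging.yml appspec.yml     # promote the staging appspec
artifacts:
  files:
    - '**/*'                                   # ship the build output plus the renamed appspec.yml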
Update: according to an Amazon technical support representative, it is not possible.
They recommend having separate git repositories for different environments (prod, staging, dev, etc.) and different apps.
That makes it harder to share code, but it is probably doable.
You can make use of environment variables exposed by the agent in your deployment scripts to identify which deployment group is being deployed.
Here's how you can use them https://blogs.aws.amazon.com/application-management/post/Tx1PX2XMPLYPULD/Using-CodeDeploy-Environment-Variables
Thanks,
Surya.
The way I have gotten around this is to have an appspec.yml.web and an appspec.yml.worker in the root of the project. I then have two jobs in Jenkins, one each for the worker and the web deployments. Each renames the appropriate file to just appspec.yml and does the bundling to send to CodeDeploy.

How do you push updates to a deployed meteor app that has a filesystem?

I have an app running on my own DigitalOcean VM that I'm trying to play around with to figure out how to run a Meteor production server. I deployed it with meteor build, but now I'm a bit unsure about how to push updates. If I build a new tarball on my own machine, I will lose the file references my users have made to files in bundle/uploads, because the remote filesystem isn't incorporated into my local project. I can imagine some hacky ways to work around this, but besides hosting the files on S3 or another third-party server, is there any way to "hot code push" into the deployed app without needing to move files around on my server?
Am I crazy for wondering what the meteor equivalent of git push/pull is in production, or just ignorant?
You can use dokku (https://github.com/progrium/dokku). DigitalOcean allows you to create an instance pre-installed with dokku too.
Once you've set up your SSH keys and set the environment variables ROOT_URL, PORT and MONGO_URL, you can add that server as a git remote and simply git push to it.
Dokku will automatically build up the Meteor app and have it running, and keep it up to date whenever you git push.
I find Dokku very convenient. There are also Flynn and Deis, which can do the same in a multi-tenant environment, with many more options.
Just one thing to keep in mind: push the people who own the repo to keep the Node version in the buildpack up to date. Meteor is a bit overzealous when it comes to using the latest version of Node and refusing older versions.
Meteor does lack a bit in this department. I can't remember where I heard this, but I believe they intend to add this very popular Meteor deployment package to their library. Short of switching to a more compatible host, I'm not aware of any better solutions.

Deploying a Symfony 2 application in AWS Opsworks

I want to deploy a PHP application from a git repository to the AWS OpsWorks service.
I've set up an app and configured Chef cookbooks so it runs the database schema creation, dumps assets, etc.
But my application has some user-generated files in a subfolder under the web root. The git repository has a .gitignore file in that folder, so an empty folder is there when I run the deploy command.
My problem is: after generating some files (by using the site) in that folder, if I run the 'deploy' command again, OpsWorks adds a new release under the 'site_name/releases/xxxx' folder and symlinks to it from the 'site_name/current' folder.
So it makes my previous user-generated stuff inaccessible. What is the best solution for this kind of situation?
Thanks in advance for your kind answers.
You have a few different options. Listed below in order of personal preference:
Use Simple Storage Service (S3) to store the files.
Add an Elastic Block Store (EBS) volume to your server and save files to the volume.
Save files to a database (This is something I would not do myself but the option is there.).
When using OpsWorks, think of servers as replicable/disposable.
What I mean by this is that if you create one server (call it server A) and then switch to a different one in the same stack (call it server B), using server A or server B should not change how your application works.
While it may seem like a good idea to save your user-generated files in a directory that is shared between different versions of your app (every time you deploy, a new release directory is generated), the moment you destroy your server you run the risk of destroying your files.
Benefits and downsides of using S3?
Benefits:
S3 gives you high redundancy and availability for your files.
S3 is external to your application server, so if your server dies or you decide to move to a different region, you can continue using the same S3 bucket.
It is easy to scale: you could add multiple application servers that read and write files to S3.
Downsides:
You need extra code in your application. You will have to use the AWS API to store and retrieve the files. Using the S3 API is not hard, but it may require an extra step to get where you need. Take a look at the "Using an Amazon S3 Bucket" walkthrough for reference; it includes the code used to upload files to the S3 bucket in the example.
Benefits and downsides of using EBS?
Benefits:
EBS is an "external hard drive" that you can easily mount to your machine using the OpsWorks Resource Manager.
EBS volumes can be backed-up and restored.
It may be the fastest option to implement and integrate to your application.
Downsides:
You need to assign it to an instance before it is running.
It could be time consuming to move from server A to server B (downtime may be required).
You cannot scale your application horizontally this way. While you can create copies of the EBS volume and assign them to different instances, the volumes will not be shared.
Downside of using a database?
Just do a Google search on "storing files in a database".
Take a look at Storing Images in DB - Yea or Nay?
My preferred choice would be to use S3, but ultimately this is your decision.
Good luck!
EDIT:
Take a look at this repository, opsworks-chef-cookbooks; it contains some recipes to deploy a Symfony2 application on OpsWorks. I have been using it for over a year and it works quite well.
Use Chef templates, and apply them in a recipe during the OpsWorks deploy lifecycle event.
