Restarting applications using Amazon CodeDeploy - aws-code-deploy

We occasionally have the need to restart services that are deployed with AWS CodeDeploy. Is it possible to have the CodeDeploy agent do this directly, without having to create a new deployment?

The AWS service you're looking for is AWS Systems Manager. You can run arbitrary commands or scripts on instances with its Run Command feature. All recent Ubuntu and Amazon Linux instances have the SSM Agent preinstalled, but if you have an older instance, you'll have to install the SSM Agent manually or through your configuration manager.
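For example (a minimal sketch; the tag key/value and service name are placeholders you would replace), a Run Command invocation to restart a systemd service across a tagged fleet could look like this:

    # Hypothetical restart of a service on all instances tagged Application=my-app
    aws ssm send-command \
      --document-name "AWS-RunShellScript" \
      --targets "Key=tag:Application,Values=my-app" \
      --parameters 'commands=["sudo systemctl restart my-app.service"]' \
      --comment "Restart my-app without creating a CodeDeploy deployment"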

No, you need to have a deployment to restart. The agent does not take actions on its own; it receives commands from the CodeDeploy service.
Depending on your use case, you can have your application emit a CloudWatch event and have that trigger a deployment in the deployment group. Note that this will create a deployment that deploys to the entire fleet.

To expand on eternaltyro's answer, you could leverage CodeDeploy's CLI tool via SSM to run the same CodeDeploy event hooks that were/are used to start and stop your application.
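As a rough sketch (assuming the codedeploy-local tool is available on the instance and that the previously deployed bundle is still on disk at the placeholder path below), that could look like:

    # Hypothetical: re-run the ApplicationStop and ApplicationStart hooks of an
    # already-deployed bundle via SSM. Tag values and the bundle path are placeholders.
    aws ssm send-command \
      --document-name "AWS-RunShellScript" \
      --targets "Key=tag:Application,Values=my-app" \
      --parameters 'commands=["codedeploy-local --bundle-location /opt/my-app/bundle --type directory --events ApplicationStop,ApplicationStart"]'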

Related

Deploy Airflow 2.0+ on Azure

I have started learning/trying Airflow. As part of my research, I installed Airflow locally through Docker, following the official page Airflow Install Docker.
I am looking for a (standard) process by which I can deploy Airflow to Azure.
Can I directly use the same docker-compose file for that?
Any help will be appreciated.
Likely the easiest way is to use AKS (Azure Kubernetes Service) and deploy Airflow on it with the official Helm chart from the Apache Airflow community:
https://airflow.apache.org/docs/helm-chart/stable/index.html
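A minimal sketch of what that looks like once kubectl is pointed at your AKS cluster (the release name and namespace below are arbitrary choices, not requirements):

    # Add the official chart repo and install Airflow with default values
    helm repo add apache-airflow https://airflow.apache.org
    helm repo update
    helm install airflow apache-airflow/airflow \
      --namespace airflow --create-namespace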
I reckon installing Airflow through docker-compose.yml on Azure wouldn't be the right approach; it is encouraged for local installations only.
The way we have set this up in Azure is like this:
Database: Azure Database for PostgreSQL server
Webserver + Scheduler: Azure App Service on a Linux App Service Plan
Docker: Azure Container Registry
Build the Docker image (either locally or in CICD pipeline)
Push the Docker image to Azure Container Registry
Set up the Postgres server and create the airflow database
Set up the Azure App Service to pull the Docker image from ACR (you can also set up continuous integration). Remember to configure environment variables. (A CLI sketch of these steps follows this list.)
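As a rough sketch of those steps (the registry, image, app, and resource group names are placeholders, and exact az flags vary by CLI version):

    # Build and push the image to ACR ("myregistry" is a placeholder)
    az acr login --name myregistry
    docker build -t myregistry.azurecr.io/airflow:2.3.0 .
    docker push myregistry.azurecr.io/airflow:2.3.0

    # Point the App Service at the image (also possible in the portal or with
    # `az webapp config container set`), then configure environment variables,
    # e.g. the metadata DB connection (variable name depends on Airflow version):
    az webapp config appsettings set \
      --resource-group airflow-rg --name airflow-web \
      --settings AIRFLOW__DATABASE__SQL_ALCHEMY_CONN="postgresql+psycopg2://..."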
Obviously, as Jarek mentioned, you could also go with setting up AKS.
You can try the preconfigured Kubernetes Airflow 2 images packaged by Bitnami and perform the deploy on AKS:
bitnami.airflow

codedeploy user profile - Windows

Does AWS CodeDeploy run as any specific user profile on Windows servers?
I am trying to run the jfrog.exe CLI to download from a remote Artifactory repository, using a prebuilt user ID and password which depend on ~/users/{user id}/.jfrog/jfrog-cli.conf.
How would CodeDeploy be able to source a .jfrog/jfrog-cli.conf?
The CodeDeploy agent is configured as a service that uses the SYSTEM account as its logon identity, so all processes it launches execute as the SYSTEM user. As a word of caution, the cmd and PowerShell processes launched by the agent run in 32-bit mode. This is important to understand because:
32-bit and 64-bit PowerShell have differences, especially when there is a dependency on installed modules.
Some things are different for the SYSTEM user; for example, its temp directory is not the same as for other users.
Knowing this helps a lot when troubleshooting.
CodeDeploy currently only supports revisions stored in S3 or GitHub. Where does your repo exist?

Deploy ASP.NET Application to AWS from Visual Studio Team Service

I need some advice about continuous deployment within Visual Studio Team Services. To be honest, I am quite new in this area, so forgive this silly question; I can't find any reference for AWS, only Azure.
My idea is to deploy an ASP.NET application, built from VSTS source control, to AWS EC2.
My current scenario is:
I have source control inside VSTS which contains the ASP.NET application code.
I created a build definition which builds the source code and produces an artifact.
I created a release definition, which copies the artifact to a remote AWS EC2 instance.
....
I don't have any idea how to continue to the next step. Could you give advice on what I should do next, or a better scenario?
Thank you.
Currently I don't see any tasks which can directly deploy to AWS, so the only way this seems possible is if you create your own task or use PowerShell or bash along with the AWS CLI to deploy your artifact. The process would be something like this:
Download the artifact in the release. This is the default if you link the artifact.
Make sure the agent machine that you are using has the AWS Tools for PowerShell, or the AWS CLI / aws-shell if you are using bash.
You can then write a PowerShell or bash script which uses the AWS CLI to deploy your artifact to AWS (sketched below).
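As a rough illustration (the bucket, application, deployment group, and artifact path names are all placeholders, and the artifact directory is assumed to contain an appspec.yml), the bash variant could be:

    # Package the downloaded artifact as a CodeDeploy revision and push it to S3
    aws deploy push \
      --application-name MyAspNetApp \
      --s3-location s3://my-deploy-bucket/MyAspNetApp.zip \
      --source ./drop

    # Create a deployment from that revision
    aws deploy create-deployment \
      --application-name MyAspNetApp \
      --deployment-group-name Production \
      --s3-location bucket=my-deploy-bucket,key=MyAspNetApp.zip,bundleType=zip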
For anyone else wondering about this in the future, AWS just released the AWS Tools for VSTS to the Visual Studio Marketplace. These tools contain a number of tasks you can use to work with AWS services such as S3, CodeDeploy, Elastic Beanstalk, Lambda and CloudFormation from within a VSTS or TFS environment.
We also just published a blog post about using the tools to publish ASP.NET and ASP.NET Core applications to AWS from within VSTS.
There are a couple of options for you. Tutorials explaining how to get each running are given below.
How to Build a CI/CD Pipeline Using AWS CodeDeploy and Microsoft Team Foundation Server (TFS)
(For hybrid/complex deployments, you can use this. You can deploy IIS websites, MSI packages, services, and exes.) The beauty of this is that with a single deployment you can deploy to both on-premises and cloud environments.
https://www.youtube.com/watch?v=MIE0P3m9eEY
How to Integrate AWS Elastic Beanstalk with Microsoft Team Foundation Server (TFS) or (VSTS)
(for IIS websites/batch jobs you can use this)
https://www.youtube.com/watch?v=nRLZZefLDqU
How to Integrate AWS Cloudformation with Microsoft Team Foundation Server (TFS)
(full infrastructure automation; manage infrastructure as code)
https://www.youtube.com/watch?v=WU93NJT0_3s

How does Meteor Up work?

I recently created a droplet on Digital Ocean, and then just used Meteor Up to deploy my site to it.
As awesome as it was to not have to mess with all of the details, I'm feeling a little worried and out of the loop about what's happening with my server.
For example, I was using the console management that Digital Ocean provides, and I tried to use the meteor mongo command to investigate what was happening with my database. It just errored, with command not found: meteor.
I know my database works, since records are persistent across accesses, but it seems like Meteor Up accomplished this without retaining any of the testing and development interfaces I grew used to on my own machine.
What does it do??? And how can I get a closer look at things going on behind the scenes?
Meteor Up installs your application to the remote server, but does not install the global meteor command-line utilities.
For those, simply run curl https://install.meteor.com | /bin/sh.
MUP does a few things. Note that MUP is currently under active development and some of this process will likely change soon. The new version will manage deployment via Docker, add support for meteor build options, and other cool stuff. Notes on the development version (mupx) can be found here: https://github.com/arunoda/meteor-up/tree/mupx.
mup setup installs (depending on your mup.json file) Node, PhantomJS, MongoDB, and stud (for SSL support). It also installs the shell script to setup your environment variables, as well as your upstart configuration file.
mup deploy runs meteor build on your local machine to package your meteor app as a bundled and zipped node app for deployment. It then copies the packaged app to the remote server, unbundles it, installs the npm modules, and runs it as a node app.
Note that meteor build packages your app in production mode rather than the debug mode that runs by default on localhost when you call meteor or meteor run. The next version of MUP will have a buildOptions property in mup.json that you can use to set the debug and mobileSettings options when you deploy.
Also, since your app is running directly via Node (rather than Meteor), meteor mongo won't work. Instead, you need to ssh into the remote server and call mongo appName.
From there, #SLaks is right about how it sets things up on the server (from https://github.com/arunoda/meteor-up#server-setup-details):
This is how Meteor Up will configure the server for you based on the given appName or using "meteor" as default appName. This information will help you customize the server for your needs.
your app lives at /opt/<appName>/app
mup uses upstart with a config file at /etc/init/<appName>.conf
you can start and stop the app with upstart: start <appName> and stop <appName>
logs are located at: /var/log/upstart/<appName>.log
MongoDB is installed and bound to the local interface (it cannot be accessed from the outside)
the database is named <appName>
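Putting that together, once you ssh into the server you can inspect and restart the app with the standard tools (using "myapp" as the appName here):

    sudo status myapp                     # upstart status of the app
    sudo stop myapp && sudo start myapp   # restart it
    tail -f /var/log/upstart/myapp.log    # follow the app logs
    mongo myapp                           # open the app's local MongoDB database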

How is Jenkins helpful to automate deployment process

Can anyone provide insights into using Jenkins for automating deployment under controlled and uncontrolled environments? We have different environments - dev/qa/uat/prod - and currently we are using batch files that call msbuild/nant scripts to deploy to the web and DB servers (web farm). Developers only have access to dev/qa, and production support will deploy to uat/prod. Prod support gets the source code from the SVN tag folder and runs the batch file to deploy the application.
By using Jenkins, is it possible to eliminate the step of the prod support team getting the script from SVN, by having them run the jobs using their credentials via a URL? And what is the general practice for using source control and a CI tool to deploy applications?
My recommendation is to reserve Jenkins for just building the software. That way the users of Jenkins only have access to development and perhaps QA systems.
To decouple the build system from the process that deploys the software I recommend the use of a binary repository manager like:
Nexus
Artifactory
Archiva
In that way, deployment scripts can retrieve any version of a previous build. The use of a repository manager would enable your QA team to certify a release prior to its deployment onto production.
Finally, consider one of the emerging deployment automation tools. Tools like Chef, Puppet, and Rundeck can be used to further version control the configuration of your infrastructure.
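As a sketch of how the deployment side can stay decoupled from Jenkins (the repository URL, artifact coordinates, and deploy script below are placeholders for whatever your prod support team actually runs):

    # Fetch a specific, QA-certified build from the repository manager...
    VERSION=1.4.2
    curl -fSL -o app-${VERSION}.zip \
      "https://nexus.example.com/repository/releases/com/acme/app/${VERSION}/app-${VERSION}.zip"

    # ...and hand it to the existing msbuild/nant-driven deployment step
    ./deploy.sh app-${VERSION}.zip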
