AWS CodeDeploy agent can't deploy code if EC2 instance has no public IP? - aws-code-deploy

I've implemented an Auto Scaling group whose instances run the CodeDeploy agent. Everything works fine if I set up the launch template to start EC2 instances with a public IPv4 address, but if I disable this, the deployment gets stuck waiting at the "ApplicationStop" step (the first one), which is never executed.
I suspect that AWS can't reach my EC2 instance properly to communicate with the local CodeDeploy agent (which is running fine according to service codedeploy-agent status).
As soon as I add back a public IPv4, everything works fine.
How can I configure my settings (SecurityGroup? Role?) to allow CodeDeploy to work fine with a private EC2 instance?
Update 1
The EC2 instances have the role "aws-elasticbeanstalk-ec2-role", which has the following managed policies attached:
AWSElasticBeanstalkWebTier
AmazonS3ReadOnlyAccess
AWSElasticBeanstalkMulticontainerDocker
AWSElasticBeanstalkWorkerTier
Maybe the issue is here: I need to attach a CodeDeploy-related policy, but when I try to attach one, there are many of them and I'm not sure which one is needed to make CodeDeploy work.
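For reference, here is a minimal sketch of how a managed policy can be attached to that role from the AWS CLI. The policy name AmazonEC2RoleforAWSCodeDeploy is my assumption (it is the AWS managed policy commonly used for instances that pull revisions from S3); adjust it if a different policy turns out to be the right one:

# Attach the assumed policy to the existing instance role
aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy

# Verify what is now attached
aws iam list-attached-role-policies --role-name aws-elasticbeanstalk-ec2-role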

Related

AWS launchpad on Bitnami

I launched the WordPress stack for AWS launchpad on Bitnami. The instance shows its state as running in the EC2 console. I tried logging in via SSH, but it doesn't connect. Also, if I try its public IP in the browser, it shows that the site could not be reached. I've been stuck on this for the last 2 hours. Any help?
It looks to me like you have a firewall issue when connecting to the instance. I advise you to check whether that is the case. In addition, I would also try restarting the instance or launching a new one to double-check.
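As a quick way to check for a firewall problem, a sketch of the relevant AWS CLI calls is below; the security group ID sg-0123456789abcdef0 is a placeholder for whatever group is attached to the Bitnami instance:

# Inspect the inbound rules currently attached to the instance's security group
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0

# Open SSH (22) and HTTP (80) if they are missing (narrow the CIDR to your own IP if possible)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0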

AWS CodeDeploy vs Windows 2016 in ASG

I use AWS CodeDeploy to deploy builds from GitHub to EC2 instances in an Auto Scaling group.
It works fine for Windows 2012 R2 with all deployment configurations.
But for Windows 2016 it fails completely with the "OneAtATime" deployment configuration;
during an "AllAtOnce" deployment only one or two instances deploy successfully, and all the others fail.
In the agent's log file, this suspicious message is present:
ERROR [codedeploy-agent(1104)]: CodeDeploy Instance Agent Service: CodeDeploy Instance Agent Service: error during start or run: Errno::ETIMEDOUT
- A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2)
All policies, roles, software, builds, and other settings are the same; I even tested this on a brand-new AWS account.
Has anybody faced this behaviour?
I ran into the same problem, but during my investigation I found out that the server's route table had a wrong route for the 169.254.169.254 network (it pointed at the gateway from the network where my template was captured), so the instance couldn't read its metadata.
From the above error it looks like the agent isn't able to talk to the CodeDeploy endpoint after the instance starts up. Please check that the routing tables and other proxy-related settings are set up correctly. Also, if you haven't already, you can turn on debug logging by setting :verbose to true in the agent config and restarting the agent. This would help debug the issue better.
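A sketch of both checks, run from PowerShell on the Windows instance; the config path and service name are the defaults I'd expect from a standard agent install, so treat them as assumptions and adjust to your setup:

# Verify the instance metadata route exists and points at the local interface
route print | findstr 169.254.169.254

# Enable verbose agent logging by editing the agent config
# (typically C:\ProgramData\Amazon\CodeDeploy\conf.yml on Windows,
#  /etc/codedeploy-agent/conf/codedeployagent.yml on Linux) and setting:
#   :verbose: true

# Restart the agent so the new setting takes effect (service name is an assumption)
Restart-Service codedeployagent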

WooCommerce webhooks not firing

WooCommerce webhooks aren't firing at all for me, even on a fresh install. I did the following:
Create a new MySQL database
Install WP from the zip file.
Set up WP.
Install WooCommerce.
Enable the REST API and create a key.
Add a "Coupon created" webhook, make sure it's set to active, and point it at a publicly accessible site.
When I create a coupon, the webhook does not fire, and no entry is created in the log. I tried this with orders as well, and it also doesn't work.
I think it's a machine configuration problem, but I'm not sure what to change. The machine is an EC2 instance and has all ports open in its security group policy.
Weirdest of all, it does work on a different EC2 instance, but that's a production machine and I want to get a dev server working so I can test things out. The only config differences between the production and dev machines that I can think of are the subnets and the firewall, but I don't understand why the subnet should matter, and I've opened all the firewall ports on the dev machine.
What Linux distributions are you running for prod and dev?
CentOS with SELinux enabled, which does not allow HTTPD scripts and modules to connect to the network by default.
setsebool -P httpd_can_network_connect on
If the above isn't the cause, try to identify network problems by connecting to AWS RDS from the command line over SSH. If you can open a connection that way, the problem is with your application; if you can't, it's a network problem. The first thing to check in that case is the AWS RDS security group. For testing, you can open port 3306 to the public.
Let me know how it goes.
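A sketch of the checks described above, run from the dev EC2 instance; the webhook delivery URL and the RDS endpoint below are placeholders, not values from the question:

# Check whether SELinux currently allows Apache/PHP to open outbound connections
getsebool httpd_can_network_connect

# Allow it persistently if it is off
sudo setsebool -P httpd_can_network_connect on

# Test outbound HTTPS from the web server to the webhook delivery URL (placeholder)
curl -I https://example.com/webhook-receiver

# Test connectivity to the database on the MySQL port (placeholder endpoint)
nc -zv mydb.abc123.us-east-1.rds.amazonaws.com 3306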

Automatically configure WordPress IP on EC2

I have installed a LAMP server on an EC2 instance. Then I created an AMI so that I can easily spin up instances in the future.
Today I went back to spin up one such instance, and to my surprise the IP in the configuration is wrong. Basically when I first installed the LAMP server, Wordpress detected the IP and configured accordingly. Now on the instance that I launched today the IP is different, but the configuration for the previous IP is still there.
Now, I know how to change Wordpress IP. My question is: How can I make this step automatic when I launch an EC2 instance from an AMI?
Thanks
Instance metadata will give you a lot of information about the current EC2 instance. You can use that plus some hand-crafted shell scripts, triggered on boot, to update the configuration.
An alternative solution is to use a configuration management tool (Chef, Ansible, ...) to help you configure the application.
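For example, a minimal boot-time sketch along those lines, assuming WP-CLI is installed and WordPress lives in /var/www/html (both assumptions; adjust for your AMI). It could be run from user data, rc.local, or a systemd unit:

#!/bin/bash
# Fetch the instance's current public IPv4 address from the instance metadata service
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)

# Point WordPress at the new address (paths and use of WP-CLI are assumptions)
wp option update siteurl "http://${PUBLIC_IP}" --path=/var/www/html --allow-root
wp option update home "http://${PUBLIC_IP}" --path=/var/www/html --allow-root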

Integration of Docker with OpenStack via the Docker Heat Plugin

I'm trying to integrate Docker with OpenStack (Icehouse) via the Docker Heat plugin, and I'm facing a problem.
OpenStack is configured according to the OpenStack tutorial for Ubuntu. I'm using a controller node and a compute node (just the 2 nodes) with legacy nova-networking.
Things to keep in mind:
Controller Node: 1 network interface - management interface
Compute Node: 2 network interfaces - the management interface and the external interface (VM instances have IPs on the same subnet as that external interface)
With OpenStack everything works perfectly, except for the following (which might be related to the problem I'm facing with Docker):
1- You can't reach (ping) the deployed VM instances from the controller node [makes sense, I think that's not a problem]
2- You can't reach (ping) the deployed VM instances from the compute node (ping: operation not permitted) [might be the issue] - but you can ping from a VM instance to the compute node
3- The virtual machines themselves don't see each other [but I don't think this is related to the issue I'm facing]
For Docker, the plugin is installed. I assume correctly, since the DockerInc::Docker ... syntax is accepted, but when I try to run the example posted in the Docker blog - making the required adjustments - the compute instance is created but the Docker container is not. I'm getting this error:
When I try it as a user with the admin role:
MissingSchema: Invalid URL u'192.168.122.26/v1.9/containers/None/json': No schema supplied. Perhaps you meant http://192.168.122.26/v1.9/containers/None/json
When I try it as a user with just the member role:
MissingSchema: Invalid URL u'192.168.122.26/v1.9/containers/create': No schema supplied. Perhaps you meant http://192.168.122.26/v1.9/containers/create
Notes:
192.168.122.26 is the IP of the created VM instance.
I've tried not only with CirrOS but also CoreOS and ubuntu-precise (same error).
Docker itself is installed on both the controller and the compute node.
The Docker plugin and its requirements are only installed on the controller node.
Finally, both the controller and the compute nodes run as virtual machines themselves.
I would be really glad if you had an idea. Thanks for your time,
Kindest Regards,
M. El Sioufy
My guess is that you haven't allowed communication to the VMs from the outside world (which the controller and/or the compute node will be from the VM's point of view). By default, communications from VMs to the outside world are allowed, but not inbound to the VMs. Try adding an "allow all TCP" rule to the default security group of the tenant that the VMs live in. This may fix your HTTP timeout.
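If it is a security-group problem, a minimal sketch with the legacy nova-network CLI (which this setup uses) would look like the following; whether "default" is the right group for the tenant is an assumption:

# Allow all inbound TCP to instances in the tenant's default security group
nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0

# Optionally also allow ICMP so the ping tests above work
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0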
