How to migrate an existing domain with an SSL certificate from CentOS/Apache to Docker/Nginx?

We have a site running on a CentOS/PHP/Apache stack. We want to migrate the whole site to Docker/PHP-FPM/Nginx using docker-compose.
So far we've set up plans for migrating pretty much everything except the domain and the existing SSL certificate.
How do we go about this?
Nginx is up and running on container port 80, published to host port 9007:
ports:
- '9007:80'
How can we point the existing domain at the Docker container and also use the existing SSL certificate?

No need for the hassle, someone already did the work for you:
https://github.com/evertramos/docker-compose-letsencrypt-nginx-proxy-companion
It's a fully configured auto-SSL Docker setup that does basically exactly what you need. Start your website container with the following additional parameters (from the Git repo):
docker run -d -e VIRTUAL_HOST=your.domain.com \
-e LETSENCRYPT_HOST=your.domain.com \
-e LETSENCRYPT_EMAIL=your.email@your.domain.com \
--network=webproxy \
--name my_app \
httpd:alpine
I can only recommend it; it's a great solution for hosting multiple projects on one server.
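If you would rather keep your existing certificate than reissue it through Let's Encrypt, a minimal sketch is to point the domain's DNS at the new host, publish ports 80 and 443 instead of 9007, mount the cert and key into the nginx container, and terminate TLS there. Everything below (the domain, the file paths, and the php service name) is an assumption to adapt to your compose file:
server {
    listen 443 ssl;
    server_name example.com;                # your existing domain (placeholder)

    # cert and key copied from the CentOS box and mounted into the container
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php:9000;              # "php" = your PHP-FPM service in docker-compose (assumption)
    }
}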

Related

Web App for Containers - Wordpress Https

I can't seem to get Web App for Containers (S1) to deploy a WordPress image from Azure Container Registry with HTTPS working for the admin section. The wp-config.php configuration file is taken from the samples on GitHub provided by Microsoft, and the Dockerfile extends wordpress:4.9.5-php7.2-apache:
# Pull image from official source with version specified
FROM wordpress:4.9.5-php7.2-apache
# Overwrite Wordpress configuration
COPY ./wp-config.php /usr/src/wordpress/
# Add permissions needed for wordpress to run
RUN chown -R www-data:www-data /usr/src/wordpress/
WORKDIR /var/www/html
I can build the image, push it, and deploy it to Web App for Containers, but when I try to log into the admin portal using https I am redirected to the non-https login.
The Docker logs on the Web App during container invocation look like this:
2018-04-23 07:57:21.751 INFO - Starting container for site
2018-04-23 07:57:21.751 INFO - docker run -d -p 58688:80 --name my-test-website__c20c_2 -e WEBSITE_SITE_NAME=my-test-website-name -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=...3cfaeb147447885bccba4565fb6192f -e HTTP_LOGGING_ENABLED=1 myacrregsitryhere.azurecr.io/wordpressdocker:21483
Things that I have tried:
Allowing both http and https in wp-config like so:
define('WP_HOME', '//'. filter_input(INPUT_SERVER, 'HTTP_HOST', FILTER_SANITIZE_STRING));
define('WP_SITEURL', '//'. filter_input(INPUT_SERVER, 'HTTP_HOST', FILTER_SANITIZE_STRING));
define('WP_CONTENT_URL', '/wp-content');
define('DOMAIN_CURRENT_SITE', filter_input(INPUT_SERVER, 'HTTP_HOST', FILTER_SANITIZE_STRING));
which results in a redirect loop that is stopped by the browser.
Enforcing HTTPS on the Azure Web App, which also results in a redirect loop that is stopped by the browser.
Enforcing SSL via wp-config.php:
define('FORCE_SSL_ADMIN', true);
How am I supposed to get https working with slots in Azure Web App for Containers?
You can enforce HTTPS for that web app in the portal, as described in https://learn.microsoft.com/en-gb/azure/app-service/app-service-web-tutorial-custom-ssl#enforce-https
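A common cause of the redirect loop described above is that App Service terminates TLS at its front end and forwards plain HTTP to the container, so WordPress never sees HTTPS and keeps redirecting. A hedged sketch of the usual reverse-proxy workaround, placed near the top of wp-config.php (this is the generic fix for SSL-terminating proxies, not something the linked tutorial covers):
// The proxy forwards the original scheme in X-Forwarded-Proto;
// tell WordPress the request was really HTTPS so FORCE_SSL_ADMIN stops looping.
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
    && strpos($_SERVER['HTTP_X_FORWARDED_PROTO'], 'https') !== false) {
    $_SERVER['HTTPS'] = 'on';
}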

How to Install SSL on AWS EC2 WordPress Site

I've created and launched my WordPress site on AWS using EC2. I followed this tutorial to create the site. It's currently mapped to a domain using Route 53. All development on the site is done online in my instance.
I would now like to install an SSL Certificate on my site. How would I do so?
If you created your WordPress site on AWS using Bitnami,
you can SSH into your instance and run:
sudo /opt/bitnami/bncert-tool
See bitnami docs for details
If you're looking for an easy and free solution, try https://letsencrypt.org/. They have an easy-to-follow doc for anyone.
TL;DR: Head to https://certbot.eff.org/, choose your OS and server type, and they will give you a 4-5 line installation to install the certificate automatically.
Before attempting, make sure your domain name is correctly pointed to your EC2 using Route53 or Elastic IP.
For example, here's all you need to run to automatically get and install an SSL certificate on an Ubuntu EC2 instance running nginx:
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx
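These commands only install certbot; a typical next step to actually obtain and install the certificate would be something like the following (domain names are placeholders):
$ sudo certbot --nginx -d example.com -d www.example.com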
Best of luck!
This tutorial provides a simple three-step guide to setting up your WordPress on AWS using Let's Encrypt / Certbot:
https://blog.brainycheetah.com/index.php/2018/11/02/wordpress-switching-to-https-ssl-hosted-on-aws/
Step 1: Get SSL certificate
Step 2: Configure redirects
Step 3: Update firewall
At each stage replace 'example.com' with your own site address.
Install certbot:
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache
Create certificates:
$ sudo certbot --apache -m admin@example.com -d example.com -d www.example.com
To configure redirects, first open the wp-config file:
$ sudo vim /var/www/html/example.com/wp-config.php
Insert the following above the "stop editing" comment line:
// HTTPS configuration
define('WP_HOME','https://example.com');
define('WP_SITEURL','https://example.com');
define('FORCE_SSL_ADMIN', true);
And finally, update the firewall via the AWS console:
Log in to your AWS control panel for your EC2 / Lightsail instance
Select the Networking tab
Within the Firewall section, just below the table, select Add another
Custom and TCP should be pre-populated within the first two fields by default; leave these as they are
Within the Port range field enter 443, then select Save
Then just reload your apache config:
sudo service apache2 reload
And you should be good to go.
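As a quick sanity check, you can confirm that plain HTTP now redirects to HTTPS (the exact status code depends on how the redirect was configured):
$ curl -I http://example.com
$ curl -I https://example.com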
According to the tutorial, since you have configured only an EC2 instance, the direct approach is to purchase an SSL certificate and install it into the Apache server. For detailed steps, follow the tutorial How to Add SSL and HTTPS in WordPress.
If you plan to use the free SSL certificates issued by AWS Certificate Manager, then it requires configuring either an Elastic Load Balancer or the CloudFront CDN. This can get complicated if you are new to AWS. If you plan to give it a try with AWS CloudFront, follow the steps in How To Use Your Own Secure Domain with CloudFront.
Using CloudFront also provides a boost in performance, since it caches your content and reduces the load on your EC2 instance. However, one of the challenges you will face is avoiding mixed-content issues. There are WordPress plugins that are capable of resolving mixed-content issues, so do try them out.
This is how I enabled SSL on my WordPress website.
I have used Let's Encrypt X.509 certificates. Let's Encrypt is a certificate authority that provides X.509 certificates in an automated fashion for free. You can find more information about Let's Encrypt at https://letsencrypt.org/.
Steps to follow:
SSH into the instance and switch to root.
Download Certbot
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
Run certbot to fetch the certificates
sudo ./certbot-auto --debug -v --server https://acme-v01.api.letsencrypt.org/directory certonly -d "your-domain-name"
A wizard will launch asking you to select between the Apache, WebRoot, and Standalone options. Select the WebRoot option and continue. Note the directory of your domain.
Usually /var/www/html will be the directory for your domain. After success you will have three certificates at the following paths:
Certificate: /etc/letsencrypt/live/<<<"Domain-Name">>>/cert.pem
Full Chain: /etc/letsencrypt/live/<<<"Domain-Name">>>/fullchain.pem
Private Key: /etc/letsencrypt/live/<<<"Domain-Name">>>/privkey.pem
Point the certificate paths in /etc/httpd/conf.d/ssl.conf at those files, then restart Apache:
service httpd restart
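For reference, the relevant mod_ssl directives in /etc/httpd/conf.d/ssl.conf would typically end up looking like this (the domain is a placeholder):
SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
# on Apache 2.4.8+ you can instead point SSLCertificateFile at fullchain.pem and drop this line
SSLCertificateChainFile /etc/letsencrypt/live/example.com/fullchain.pem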
And finally, I enabled the Really Simple SSL plugin in WordPress. That's it!

How to setup a simple reverse proxy in docker?

I am new to docker. I have applications running in multiple containers. Now, I would like to publish all my apps. What I am planning to do is to make a cluster containing all my applications. I want at least 4 containers:
An Nginx container that faces the internet like a reverse proxy. It is responsible for redirecting traffic to the other containers, since they are not directly accessible through the internet.
A Node_js container that publishes a web app in Node.js (http://www.node-app.me).
A java_EE container that publishes a Java EE application (http://www.java_ee-app.me).
A Django container that publishes a Django application (http://www.django-app.me).
This is the idea I have, but I don't know how to set up the nginx container to play the proxy role and forward each request to the correct container, so that if a user sends a request like http://www.node-app.me, the nginx container will return the result from Node_js, and so on. Can you please give an idea of where to start?
The setup could look like this (sorry, I am not very good at drawing).
Unless you have a specific need for nginx, I suggest you use Træfik to do the reverse proxy. It can be configured to dynamically pick up reverse proxy rules via labels on your containers. Here's a basic example.
First, create a common network for Træfik and your three containers.
docker network create traefik
Run Træfik with port 80 exposed and the docker backend enabled.
docker run --name traefik \
-p 80:80 \
-v /var/run/docker.sock:/var/run/docker.sock \
--network traefik \
traefik:1.2.3-alpine \
--entryPoints='Name:http Address::80' \
--docker \
--docker.watch
Run your three services with the appropriate labels. Make sure they share a common network with Træfik so that Træfik can reach them. The node_js one might look something like this.
docker run --name node_js \
--network traefik \
--label 'traefik.frontend.rule=Host:www.node-app.me' \
--label 'traefik.frontend.entryPoints=http' \
--label 'traefik.port=80' \
--label 'traefik.protocol=http' \
your_node_js_image
Træfik will dynamically create a frontend rule that matches on the Host header for www.node-app.me when it sees this container running. The traefik.port and traefik.protocol labels let Træfik know how to communicate with your container.
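Before changing DNS, you can smoke-test the routing by sending a request to the proxy with an explicit Host header:
curl -H 'Host: www.node-app.me' http://localhost/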
See the documentation for Træfik's Docker backend for more options and details.

Communication between two containers on the same host

The idea is simple: I need to send a signal from one container to another one to restart nginx.
Is connecting to the nginx container from the first one over SSH a good solution?
Do you have other recommended ways to do this?
I don't recommend installing SSH; Docker containers are not virtual machines, and they should respect the microservices architecture to benefit from the many advantages it provides.
In order to send a signal from one container to another, you can use the Docker API.
First you need to share /var/run/docker.sock between required containers.
docker run -d --name control -v /var/run/docker.sock:/var/run/docker.sock <Control Container>
To send a signal to a container named nginx, you can do the following:
echo -e "POST /containers/nginx/kill?signal=HUP HTTP/1.0\r\n" | \
nc -U /var/run/docker.sock
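If curl is available, the same Docker API call reads a little more clearly (curl's --unix-socket option requires curl 7.40 or newer):
curl --unix-socket /var/run/docker.sock \
  -X POST "http://localhost/containers/nginx/kill?signal=HUP"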
Another option is using a custom image with a custom script that checks the nginx config files and, if their hash has changed, sends a reload signal. This way, each time you change the config, nginx will reload automatically, or you can reload manually using commands. These kinds of scripts are common among Kubernetes users. The following is an example:
nginx "$#"
oldcksum=`cksum /etc/nginx/conf.d/default.conf`
inotifywait -e modify,move,create,delete -mr --timefmt '%d/%m/%y %H:%M' --format '%T' \
/etc/nginx/conf.d/ | while read date time; do
newcksum=`cksum /etc/nginx/conf.d/default.conf`
if [ "$newcksum" != "$oldcksum" ]; then
echo "At ${time} on ${date}, config file update detected."
oldcksum=$newcksum
nginx -s reload
fi
done
Don't forget to install the inotify-tools package, which provides inotifywait.
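As a sketch of how this could be baked into an image (the script name and base image are assumptions, not part of the original example):
FROM nginx:alpine
# inotifywait is provided by the inotify-tools package on Alpine
RUN apk add --no-cache bash inotify-tools
COPY watch-and-reload.sh /usr/local/bin/watch-and-reload.sh
RUN chmod +x /usr/local/bin/watch-and-reload.sh
ENTRYPOINT ["/usr/local/bin/watch-and-reload.sh"]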

Docker run results in "host not found in upstream" error

I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without problems.
But the problem is that Docker somehow cannot resolve this "custom IP", even when the host is written in the container (image?) /etc/hosts file.
When the Docker container starts up, I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into the root of the project, where the Dockerfile is located
build the image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker cloud workflow
Add an extra_hosts directive to your Stackfile, like this:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
Then click Redeploy in Docker Cloud, so that the changes take effect.
Optimization tip
ignore as many folders as possible to speed up the process of sending data to the Docker daemon
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_modules and tmp
in my case the tmp contained about 1.3GB of small files, so ignoring it sped up the process significantly
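For the tip above, a minimal .dockerignore might contain just the folders mentioned:
node_modules
bower_modules
tmp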
