Docker: change WordPress from HTTP to HTTPS

I'm struggling to change the configuration of an already set-up container to use SSL.
I have a VPS with CentOS 8, on which I have two containers running: one with WordPress and the other with the WordPress DB. WordPress works fine on port 80. I'd like to enable SSL and move it to 443.
Here is what I have done so far:
Opened firewall port 443 for the Docker trusted interface;
Changed wp-config.php so that the WP home and site URLs use the https protocol;
Added FORCE_SSL_ADMIN to wp-config.php;
In the stopped container, edited hostconfig.json and config.v2.json to map port 443 instead of 80.
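For what it's worth, the usual alternative to hand-editing hostconfig.json/config.v2.json is to recreate the container with the desired port mapping; a rough sketch (the container, network, and volume names here are assumptions, adjust to your setup):

# Names below (wordpress, wp-net, wp-data) are assumptions; re-attach whatever
# volumes and environment variables your original container used.
docker stop wordpress && docker rm wordpress
docker run -d --name wordpress --network wp-net \
  -v wp-data:/var/www/html \
  -p 80:80 -p 443:443 \
  wordpress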
EDIT 1
Current outcome:
When running curl localhost:443 I get the WordPress page returned (from the local machine to WordPress); however, I don't think it is using the HTTPS protocol.
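To check whether that port is actually speaking TLS rather than plain HTTP, something like this should tell you quickly:

# If 443 is serving plain HTTP, the TLS handshake below will fail outright.
curl -vk https://localhost/
# Shows the certificate (if any) presented on port 443.
openssl s_client -connect localhost:443 </dev/null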
Desired outcome:
I'd like to be able to serve WordPress over HTTPS for external traffic. Right now I get a "connection refused" message.

If I were you I wouldn't bother doing any more config at the container level; instead I'd put a web server like Nginx or Apache in front of the containers to terminate TLS and route incoming traffic to them via a reverse proxy.
If this is not possible, could you please provide the (I'm assuming) docker-compose file and Dockerfiles you used to set up your server?
EDIT:
Don't publish the Docker container on 443; it can listen internally on any port you want, but map it to something else on the host, like 8080, for the following example config.
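For example, with the official WordPress image still listening on port 80 inside the container (the container name is an assumption):

# Publish the container's port 80 only on the host loopback, port 8080;
# nginx on the host terminates TLS on 443 and proxies to it.
docker run -d --name wordpress -p 127.0.0.1:8080:80 wordpress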
nginx.conf
events {} # required top-level section for a standalone nginx.conf

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;

    server {
        listen 80;
        server_name example.com;
        rewrite ^ https://$host$request_uri permanent;
    }

    server {
        listen 443 ssl http2;
        server_name example.com;

        ssl on; # redundant: 'listen 443 ssl' already enables TLS (directive deprecated in newer nginx)
        ssl_certificate /path/to/cert.pem;
        ssl_certificate_key /path/to/privkey.key;

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }
}
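After putting that config in place, a quick way to validate and apply it, and to open 443 in firewalld on CentOS 8 (assuming nginx runs directly on the host):

nginx -t                                      # syntax check
systemctl reload nginx                        # apply without dropping connections
firewall-cmd --permanent --add-service=https  # open port 443
firewall-cmd --reload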

Related

cloudfront nginx proxy_pass

I have an SPA. It is located in S3 and is served through CloudFront at spa.example.com.
For each user I create a separate domain. Nginx proxies the SPA's CloudFront distribution for each individual domain, without nginx caching. Nginx itself is also behind CloudFront.
server {
    listen 443 ssl;
    server_name user1.example.com;

    ssl_certificate ...;
    ssl_certificate_key ...;

    location / {
        # Note: because proxy_pass uses a variable, nginx needs a resolver directive
        # (defined elsewhere in the config) to resolve the hostname at request time.
        set $spa spa.example.com;
        proxy_pass https://$spa$request_uri;
        proxy_ssl_protocols TLSv1.2;
        proxy_ssl_server_name on;
    }
}
The question is: which is cheaper?
The proxy configured as it is now, or a separate CloudFront distribution with the SPA for each user?

Nginx is ignoring www in my rules and I don't want it to

What I want:
I want to redirect www.mydomain.eu and mydomain.eu to, let's say, www.google.com, while having access to a local gitea server through git.mydomain.eu.
What I have:
I have this nginx config in /etc/nginx/sites-available:
ssl_certificate /XXX/fullchain.pem;
ssl_certificate_key /XXX/privkey.pem;

server {
    listen 443 ssl default_server;
    listen 80 default_server;
    server_name www.mydomain.eu mydomain.eu;

    access_log /var/log/nginx/reverse-access.log;
    error_log /var/log/nginx/reverse-error.log;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        return 301 http://google.com;
    }
}

server {
    listen 443 ssl;
    server_name git.mydomain.eu;

    access_log /var/log/nginx/reverse-access.log;
    error_log /var/log/nginx/reverse-error.log;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    location / {
        proxy_pass http://localhost:3000;
    }
}
with XXX being an absolute path and mydomain being the actual name of my domain (this config file is also linked into sites-enabled via an "ln -s" command).
What my problem is
When I go to https://mydomain.eu, I am redirected to https://www.google.com/ => great!
When I go to https://www.mydomain.eu, Firefox (and Chrome) say "This site can't be reached" => :( Different behavior than mydomain.eu; why?
Same with https://git.mydomain.eu ("This site can't be reached") => why? I am sure that http://localhost:3000 is a valid website, as I can access it through its IP address.
It seems that nginx ignores the "www" in my first rule, and I can't figure out why.
This is not related to nginx but to your domain host (DNS) configuration, as the traffic doesn't even reach your nginx server yet.
In order to be able to access git.example.com, you will need a CNAME configured at your DNS host, with host git and value example.com. You also need another one for www, as shown below:
Type    Host    Value
CNAME   git     example.com
CNAME   www     example.com
One more thing to be aware of: if you are using a sub-domain like git.example.com, then depending on how you configured your SSL certificate and what kind of certificate you purchased, git.example.com may need a separate SSL certificate, unless you have a multi-domain (SAN or wildcard) certificate.
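A quick way to confirm the records exist and have propagated (domain assumed):

# Both should print the CNAME target (e.g. example.com.) before nginx ever sees a request.
dig +short www.example.com CNAME
dig +short git.example.com CNAME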

NGINX Incorrectly Forwarding Requests to Default Location

I have a React web application that I'm trying to deploy on an AWS EC2 instance and I'm using NGINX. I am trying to set it up so that all http requests get redirected to https. Right now it does appear to be redirecting all http requests to https, but NGINX is forwarding the request to the default path /usr/share/nginx/html/ instead of to the web application that I have running on localhost. I have read dozens of articles and have been trying to figure this out for days. Pointers would be much appreciated. Thanks in advance.
Here is my NGINX server configuration at /etc/nginx/sites-available/default:
server {
    listen 80;

    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2 default_server;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1h;

    location / {
        proxy_pass http://127.0.0.1:3839;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Also, the application is running and accessible at the port specified in my location block. I can reach it on the machine with curl 127.0.0.1:3839 with no problems. I am able to see in /var/log/nginx/error.log that nginx is attempting to serve requests out of the /usr/share/nginx/html/ directory which is how I figured out that is the issue. I just have no idea why it's sending requests there instead of to the port on localhost that I specified in my location block. If I go to the root url for my application I get the "Welcome to NGINX" page. If I go to any subpath under my root url like example.com/login, then I get a 404 and error.log shows that it couldn't find resource /usr/share/nginx/html/login for example. Thanks :)
Update:
Inside of the listen 80 server block I added
location / {
    proxy_pass http://127.0.0.1:3839;
}
and now it seems to be working correctly, but I have no idea why I would need to define a location block in the listen 80 server definition if requests in that block are just being redirected to be caught by the other server definition listening on 443. Any idea why this is working now?
I figured out the reason that location block in the http server definition worked. In AWS I accidentally had my load balancer forwarding all requests to port 80 on the EC2 instance. So even though my http server definition was redirecting to the https version of the site, those https requests were still ending up being handled by that same http server definition and since it didn't have a location block at all previously, that was causing it to fail. In the end, I removed the location block from the http server definition and then correctly updated my load balancer to forward https requests to port 443 on the EC2 instance and now everything works as expected.
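Once the load balancer forwards HTTPS to port 443, both paths can be sanity-checked from outside (domain assumed):

curl -I http://example.com/         # expect a 301 to the https:// URL
curl -I https://example.com/login   # expect the app's response, not the nginx welcome page or a 404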

How can I use nginx to forward incoming requests from a certain URL on port 80 to various docker applications

This is my story.
I'm running a Meteor.js app that launches Docker containers on the same host machine. Meteor.js is set to run on port 8080, which is where all HTTP and HTTPS requests for "/" are forwarded. My nginx configuration at /etc/nginx/project/sites-available/site is as follows:
server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;

    root /usr/share/nginx/html;
    index index.html index.htm;
    server_name **projectdomain.com**;

    location / {
        rewrite ^ https://$server_name$request_uri? permanent;
    }
}

server {
    listen 443 ssl spdy;
    server_name **projectdomain.com**;

    root html;
    index index.html;

    ssl_certificate /etc/nginx/ssl/project.crt;
    ssl_certificate_key /etc/nginx/ssl/project.key;
    ssl_stapling on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-$

    add_header Strict-Transport-Security "max-age=31536000;";

    if ($http_user_agent ~ "MSIE" ) {
        return 303 https://browser-update.org/update.html;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Forwarded-For $remote_addr;

        if ($uri != '/') {
            expires 30d;
        }
    }
}
I want a certain URL, such as projectdomain.com/4200, to point to projectdomain.com:4200, where my Docker container is listening. I want to do this because the target audience of my project is behind a corporate firewall that does not allow them to access the app running on port 4200. The Docker app runs just fine and is accessible when one is not behind a firewall by heading to projectdomain.com:4200; I just want it bridged over 80 or 443 in keeping with my current nginx settings.
When I do
location /4200 {
    proxy_pass http://127.0.0.1:4200;
}
although my Docker container is running on 4200, heading to projectdomain.com/4200 gives an nginx 502 error. This probably has something to do with the netstat -tulpn output:
whereas my Meteor project runs on 127.0.0.1:8080, the Docker container shows as running on :::4200. I think the reason I get the 502 is that nginx forwards the request for /4200 to 127.0.0.1:4200, where nothing is listening (according to netstat).
The question is: what should I do to make Docker run the container on 127.0.0.1:4200 instead of :::4200, or is there another approach I should follow?
First, you can try the nginx-proxy image.
nginx-proxy sets up a container running nginx and docker-gen. docker-gen generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
This image can't work with locations, but you can use subdomains, like 4200.projectdomain.com. That is the simplest way.
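A rough sketch of that approach with the nginx-proxy image (the subdomain and your app's image name are assumptions):

# nginx-proxy watches the Docker socket and generates vhost configs automatically.
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# Any container started with VIRTUAL_HOST set is then proxied by hostname.
docker run -d -e VIRTUAL_HOST=4200.projectdomain.com your-app-image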
Second, you can configure nginx manually.
You need to link the containers and configure nginx as described here.
First of all, proxy_pass should point at the IP address of the host the container is running on, not localhost or 127.0.0.1.
If you have many dynamic port URLs and want to map them to upstream ports, use this:
location ~ ^/(\d{4,})$ {
    set $p_port $1;
    proxy_pass http://HOSTIP:$p_port;
}
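Alternatively, to answer the last part of the question directly: if you want the container reachable only as 127.0.0.1:4200, you can pin the published port to an address when starting it (image name assumed):

# netstat will then show 127.0.0.1:4200 instead of :::4200,
# and the existing proxy_pass http://127.0.0.1:4200 target becomes reachable.
docker run -d -p 127.0.0.1:4200:4200 your-app-image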

nginx CORS Issues with MAXCDN and Easydns with Digital Ocean

I am having issues with CORS, specifically with MaxCDN. CORS was working properly with MaxCDN until a few days ago. I have posted my host config below, and the CORS header is included.
I am stumped at this point, as I have done the following to troubleshoot:
Disabled a rocket-cache specific configuration for nginx included in the server block.
Changed caching methods: rather than redis-hhvm I have tried switching over to fcgi-hhvm with rocket cache.
Disabled rocket cache after clearing its cache, then purged the entire cache and used a third-party WordPress plugin specifically for linking the CDN.
I am using SNI with SPDY on MaxCDN; I have a cert just for the subdomain (cdn.jurisdesk.com). And I am using DigitalOcean for hosting.
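One quick check that helps rule out an SNI/certificate problem on the subdomain:

# -servername sends SNI, so this shows the certificate actually served for cdn.jurisdesk.com.
openssl s_client -connect cdn.jurisdesk.com:443 -servername cdn.jurisdesk.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates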
Below is my current nginx config (everything was working properly until a few days ago, which prompted me to speak with MaxCDN support, who are great by the way, and extremely knowledgeable when it comes to advanced configurations, specifically with nginx).
server {
    server_name www.jurisdesk.com;

    ssl_certificate_key /path/to/key/foobar.key;
    ssl_certificate /path/to/cert/foobar.crt;

    listen *:80;
    listen *:443 ssl spdy;
    listen [::]:80 ipv6only=on;
    listen [::]:443 ssl spdy ipv6only=on;

    return 301 https://jurisdesk.com$request_uri;
}

server {
    server_name jurisdesk.com;

    listen *:80;
    listen [::]:80;

    return 301 https://jurisdesk.com$request_uri;
}

server {
    server_name jurisdesk.com;

    listen *:443 ssl spdy;
    listen [::]:443 ssl spdy;

    ssl on;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
    ssl_certificate_key /path/to/key/foobar.key;
    ssl_certificate /path/to/cert/foobar.crt;

    access_log /var/log/nginx/jurisdesk.com.access.log rt_cache_redis;
    error_log /var/log/nginx/jurisdesk.com.error.log;

    root /var/www/jurisdesk.com/htdocs;
    index index.php index.html index.htm;

    include common/redis-hhvm.conf;
    include rocket-nginx/rocket-nginx.conf;
    include common/wpcommon.conf;
    include common/locations.conf;

    location ~ \.(ttf|ttc|otf|eot|woff|woff2|font.css|css|js)$ {
        add_header Access-Control-Allow-Origin "*";
    }
}
I have also added the CORS header to rocket-nginx.conf, as this is something I've been tinkering with lately and it reflects a change to my config; however, I have since removed that directive to eliminate it as the cause of the problem.
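For what it's worth, a quick way to confirm whether the header is actually emitted for a static asset (the asset URL here is an assumption):

# Should print an Access-Control-Allow-Origin line if the font/CSS location block matches.
curl -sI -H "Origin: https://cdn.jurisdesk.com" https://jurisdesk.com/wp-content/themes/some-theme/font.woff2 | grep -i access-control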
