My site gives error 521 all the time.
This is the error I get on my server:
$ sudo service varnish reload
* Reloading HTTP accelerator varnishd
Connection failed (localhost:6082)
Error: vcl.load 8d6fb6be-9a0a-4896-be47-e2678e3c2617 /etc/varnish/default.vcl failed
Moreover, varnishlog shows nothing.
I am following this tutorial to set up the server, and I changed the daemon options to:
DAEMON_OPTS="-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u www-data -g www-data \
-S /etc/varnish/secret \
-s malloc,256m"
The /etc/varnish/default.vcl file is copied from the tutorial. Every &amp; in it has been corrected to &.
It is a fresh VPS. No firewall.
Any clue how to resolve it?
Thanks!
Three things come to mind:
Start Varnish in foreground mode and check what it says:
varnishd -F -a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-u www-data -g www-data \
-S /etc/varnish/secret \
-s malloc,256m
Try changing -T localhost:6082 to -T 127.0.0.1:6082
Your port 6082 might already be taken. Change it, or check whether it appears in the list of open ports with
netstat -tlnep
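For example, to look specifically for the CLI port (ss -tlnp works the same way if netstat isn't installed):
# Is anything already listening on 6082?
sudo netstat -tlnep | grep 6082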
Restart your Varnish:
sudo /etc/init.d/varnish restart
then
sudo /etc/init.d/varnish reload
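If the reload still fails, compiling the VCL by hand usually reveals the actual error; the -C flag only compiles the VCL and prints the generated C code, without starting the daemon:
sudo varnishd -C -f /etc/varnish/default.vcl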
With Docker I was able to run the WordPress example for docker-compose on nearly every platform, without prior Docker knowledge.
I am looking for a way to achieve the same with Podman.
In my case, to have a fast cross-platform way to setup a working WordPress installation for development.
As Podman is far younger, a valid answer in 2022 would also be: it is not possible because... / it is only possible given constraint X.
Still, I would like to create an entry point for other people who run into the same issue in the future.
I posted my own efforts below. Before I spend more hours debugging lots of small (but still solvable) issues, I wanted to find out whether someone else has faced the same problem and already has a solution. If you have, please document its constraints clearly.
My particular issue, as a reference
I am on Ubuntu 20.04 and podman -v gives 3.4.2.
docker/podman compose
When I use docker-compose up with the Podman back-end on Docker's WordPress .yml file, I run into the "duplicate mount destination" issue.
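For reference, this is roughly how I pointed docker-compose at Podman (a sketch for rootless Podman with the systemd socket unit; the socket path may differ on other setups):
# Expose the Docker-compatible API socket for the current user
systemctl --user enable --now podman.socket
# Point docker-compose (and the docker CLI) at it
export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock
docker-compose up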
podman-compose is part of Podman 4.1.0, which is not available on Ubuntu as I write this.
Red Hat example
Red Hat's example gives "Error establishing a database connection ... contact with the database server at mysql could not be established".
A suggested solution for the above does not work for me; share is likely a typo there, so I tried replacing it with unshare.
CentOS example
I found an example which uses pods instead of a docker-compose.yml file, but it is written for CentOS.
I modified the CentOS example, see the script below. I get the containers up and running; however, WordPress is unable to connect to the database.
#!/bin/bash
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
sudo podman pod rm -f $POD_NAME
# Pull before run, bc: invalid reference format error
sudo podman pull mariadb:latest
sudo podman pull wordpress
# Create a pod instead of --link, so both containers can reach each other.
sudo podman pod create -n $POD_NAME -p 80:80
sudo podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
sudo podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
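To debug the failing connection, the container logs can be inspected with the usual Podman commands:
sudo podman logs wordpress_db
sudo podman logs wordpress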
Also, I was a bit unsure where to post this question. If Server Fault or another Stack Exchange site is a better fit, I will happily post there.
Actually, your code works with just small changes.
I removed the sudos and changed the pod's external port to 8090 instead of 80, so now everything runs as a non-root user.
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
podman pod rm -f $POD_NAME
# Pull before run, bc: invalid reference format error
podman pull docker.io/mariadb:latest
podman pull docker.io/wordpress
# Create a pod instead of --link,
# so both containers can reach each other.
podman pod create -n $POD_NAME -p 8090:80
podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
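Once both containers are up, WordPress should answer on the pod's published port (8090 in this version):
podman pod ps
curl -I http://localhost:8090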
This is what worked for me:
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
POD_NAME='wordpress_mariadb'
DB_ROOT_PW='sup3rS3cr3t'
DB_NAME='wp'
DB_PASS='s0m3wh4tS3cr3t'
DB_USER='wordpress'
podman pod create --name $POD_NAME -p 8080:80
podman run \
-d --restart=always --pod=$POD_NAME \
-e MYSQL_ROOT_PASSWORD="$DB_ROOT_PW" \
-e MYSQL_DATABASE="$DB_NAME" \
-e MYSQL_USER="$DB_USER" \
-e MYSQL_PASSWORD="$DB_PASS" \
-v $HOME/public_html/wordpress/mysql:/var/lib/mysql:Z \
--name=wordpress-db docker.io/mariadb:latest
podman run \
-d --restart=always --pod=$POD_NAME \
-e WORDPRESS_DB_NAME="$DB_NAME" \
-e WORDPRESS_DB_USER="$DB_USER" \
-e WORDPRESS_DB_PASSWORD="$DB_PASS" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
-v $HOME/public_html/wordpress/html:/var/www/html:Z \
--name wordpress docker.io/library/wordpress:latest
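If the pod should survive reboots, Podman can also generate systemd units for it (see podman-generate-systemd for where to install the generated files):
# Writes unit files for the pod and its containers into the current directory
podman generate systemd --files --name wordpress_mariadb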
We're facing an issue when installing Varnish 6.0.8 on Ubuntu 18.04.6: it doesn't create the secret file inside the /etc/varnish directory.
We use the following script for the installation:
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish60lts/script.deb.sh | sudo bash
Can someone please help?
PS: we tried to install later versions (6.6 and 7.0.0) and got the same issue.
From a security point of view, remote CLI access is not enabled by default. You can see this when looking at /lib/systemd/system/varnish.service:
[Unit]
Description=Varnish Cache, a high-performance HTTP accelerator
After=network-online.target nss-lookup.target
[Service]
Type=forking
KillMode=process
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072
# Locked shared memory - should suffice to lock the shared memory log
# (varnishd -l argument)
# Default log size is 80MB vsl + 1M vsm + header -> 82MB
# unit is bytes
LimitMEMLOCK=85983232
# Enable this to avoid "fork failed" on reload.
TasksMax=infinity
# Maximum size of the corefile.
LimitCORE=infinity
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,256m
ExecReload=/usr/sbin/varnishreload
[Install]
WantedBy=multi-user.target
There are no -T and -S parameters in the standard systemd configuration. However, you can enable this by modifying the systemd configuration yourself.
Just run sudo systemctl edit --full varnish to edit the runtime configuration and add a -T parameter to enable remote CLI access.
Be careful with this and make sure you restrict access to this endpoint via firewalling rules.
Additionally, you'll need to add -S /etc/varnish/secret as a varnishd runtime parameter in /lib/systemd/system/varnish.service.
You can use the following command to add a random unique value to the secret file:
uuidgen | sudo tee /etc/varnish/secret
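Since this secret gates CLI access, it's also worth restricting the file to root:
sudo chmod 600 /etc/varnish/secret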
This is what your runtime parameters would look like:
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,2g \
-S /etc/varnish/secret \
-T :6082
When you're done just run the following command to restart Varnish:
sudo systemctl restart varnish
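You can then verify that the CLI endpoint accepts authenticated connections, using the -T and -S values configured above:
varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret ping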
I am trying to run a WordPress app inside a Docker container on an Ubuntu VPS using nginx-proxy.
First I run the nginx-proxy server using the following command:
docker run -d \
-p 80:80 \
-p 443:443 \
--name proxy_server \
--net nginx-proxy-network \
-v /etc/certificates:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
Then I run the MySQL database server using the following command:
docker run -d \
--name mysql_db \
--net nginx-proxy-network \
-e MYSQL_DATABASE=db1 \
-e MYSQL_USER=db1 \
-e MYSQL_PASSWORD=db1 \
-e MYSQL_ROOT_PASSWORD=db12 \
-v mysql_server_data:/var/lib/mysql \
mysql:latest
I am able to verify that the MySQL server is running by connecting to it with the following commands:
root:~# docker exec -it mysql_db /bin/bash
root@dd7643384f76:/# mysql -h localhost -u root -p
mysql> show databases;
Now that the nginx-proxy and mysql_db containers are running, I want to proxy the WordPress image on usa.mydomain.com. To do that, I run the following command:
docker run -d \
--name wordpress \
--expose 80 \
--net nginx-proxy-network \
-e DEFAULT_HOST=usa.mydomain.com \
-e WORDPRESS_DB_HOST=mysql_db:3306 \
-e WORDPRESS_DB_NAME=db1 \
-e WORDPRESS_DB_USER=db1 \
-e WORDPRESS_DB_PASSWORD=db1 \
-v wordpress:/var/www/html \
wordpress:latest
I can see all 3 containers running by executing docker ps -a.
However, when I browse http://usa.mydomain.com I get an HTTP 503 error:
503 Service Temporarily Unavailable nginx/1.17.5
I validated that usa.mydomain.com points to the server's IP address by running the following from the command line on my machine:
ipconfig /flushdns
ping usa.mydomain.com
Even when I browse to the server's IP address directly, I get the same 503 error.
What could be causing this issue?
I'm trying to create a Dockerfile which runs as a non-root user.
When I build it, everything works fine, but nginx cannot write its log file because it doesn't have enough permissions. Can I, when building a Docker image, give root permissions only to nginx?
I have tried chmod and chown on the blocked directories; that doesn't work.
FROM php:7.1-fpm-alpine
RUN apk add --no-cache shadow
RUN apk add --no-cache --virtual .ext-deps \
openssl \
unzip \
libjpeg-turbo-dev \
libwebp-dev \
libpng-dev \
freetype-dev \
libmcrypt-dev \
imagemagick-dev \
nodejs-npm \
nginx \
git \
inkscape
# imagick
RUN apk add --update --no-cache autoconf g++ imagemagick-dev libtool make pcre-dev \
&& pecl install imagick \
&& docker-php-ext-enable imagick \
&& apk del autoconf g++ libtool make pcre-dev
# Install Blackfire
RUN version=$(php -r "echo PHP_MAJOR_VERSION.PHP_MINOR_VERSION;") \
&& curl -A "Docker" -o /tmp/blackfire-probe.tar.gz -D - -L -s https://blackfire.io/api/v1/releases/probe/php/linux/amd64/$version \
&& tar zxpf /tmp/blackfire-probe.tar.gz -C /tmp \
&& mv /tmp/blackfire-*.so $(php -r "echo ini_get('extension_dir');")/blackfire.so \
&& printf "extension=blackfire.so\nblackfire.agent_socket=tcp://blackfire:8707\n" > $PHP_INI_DIR/conf.d/blackfire.ini
RUN apk add --no-cache icu-dev \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl
RUN docker-php-ext-configure pdo_mysql && \
docker-php-ext-configure opcache && \
docker-php-ext-configure exif && \
docker-php-ext-configure pdo && \
docker-php-ext-configure zip && \
docker-php-ext-configure gd \
--with-jpeg-dir=/usr/include --with-png-dir=/usr/include --with-webp-dir=/usr/include --with-freetype-dir=/usr/include && \
docker-php-ext-configure sockets && \
docker-php-ext-configure mcrypt
RUN docker-php-ext-install pdo zip pdo_mysql opcache exif gd sockets mcrypt && \
docker-php-source delete
RUN ln -s /usr/bin/php7 /usr/bin/php && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
mkdir -p /run/nginx
COPY ./init.sh /
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./.env /
RUN chmod +x /init.sh
EXPOSE 80
RUN addgroup -g 1001 node \
&& adduser -u 1001 -G node -s /bin/sh -D node
ARG UID=1001
ARG GID=1001
ENV UID=${UID}
ENV GID=${GID}
RUN usermod -u $UID node \
&& groupmod -g $GID node
RUN chown 1001:1001 /var/lib/nginx -R
RUN mkdir -p /var/tmp/nginx
RUN chown 1001:1001 /var/tmp/nginx -R
USER node
ENTRYPOINT [ "/init.sh" ]
There are quite a few unknowns in your question, for example, the contents of your default.conf file. By default the nginx logs are stored in /var/log/nginx, but I'll assume you're overriding that in the configuration.
The next thing is that the master process of nginx needs to run as root if you want it to be able to bind to system ports (0-1023), so if you are using nginx as a web server and intend to use ports 80 and 443, you should stick with running the nginx master process as root.
In case you plan to use other ports and are set on the idea of running the master process as non-root, then you can check this answer for suggestions on how to do that - https://stackoverflow.com/a/42329561/5359953
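For completeness, one technique from that answer is to grant the nginx binary the capability to bind privileged ports, so even the master process can run unprivileged (a sketch for this Alpine image; it assumes the libcap package provides setcap):
RUN apk add --no-cache libcap \
 && setcap cap_net_bind_service=+ep /usr/sbin/nginx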
I am using the term master process a lot here because nginx spawns worker processes to handle the actual requests, and those can run as a different user (defined in the nginx configuration file).
I found the solution. I just changed RUN chown 1001:1001 /var/lib/nginx -R to RUN chown -R 1001:1001 /var/. That works fine.
RUN chown -R 1001:1001 /var/
That can sometimes actually be a bad decision, though.
You can try granting permissions more narrowly instead, like this:
RUN chown -R 1001:1001 /var/tmp/nginx
RUN chown -R 1001:1001 /var/lib/nginx
RUN chown -R 1001:1001 /var/log/nginx
RUN chown -R 1001:1001 /run/nginx
I guess RUN chown 1001:1001 /var/lib/nginx -R went wrong because I put the -R flag too late in the command.
This happened on Ubuntu 16.04 in VirtualBox on Windows 10, with docker version 1.12.1, and swagger-ui version 2.2.2.
I was trying to build and run Swagger UI in a docker container, following the instructions on their site:
docker build -t swagger-ui-builder .
docker run -p 127.0.0.1:8080:8080 swagger-ui-builder
The instructions say that I should now be able to view swagger-ui running; however, when I opened 127.0.0.1:8080 I only got this page back:
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.8.1</center>
</body>
</html>
This is the content of the Dockerfile:
FROM alpine:3.3
MAINTAINER Roman Tarnavski
RUN apk add --update nginx
COPY nginx.conf /etc/nginx/
ADD ./dist/ /usr/share/nginx/html
EXPOSE 8080
CMD nginx -g 'daemon off;'
I found similar posts on stackoverflow, but none of them helped me solve this problem. What am I doing wrong and how to fix this?
The problem was caused by a permission requirement: the www-data user/group did not have access to the website's directory and files.
This problem was explained in the accepted answer to this post: Nginx 403 forbidden for all files
To solve this, the following lines have to be added to the Dockerfile:
RUN set -x ; \
addgroup -g 82 -S www-data ; \
adduser -u 82 -D -S -G www-data www-data && exit 0 ; exit 1
RUN chown -R www-data:www-data /usr/share/nginx/html/*
RUN chmod -R 0755 /usr/share/nginx/html/*
The upper part of the commands is explained in this gist: https://gist.github.com/briceburg/47131d8caf235334b6114954a6e64922
The user/group www-data has to be added first, before permissions can be set for it. The gist notes that 82 is the standard uid/gid for "www-data" in Alpine.
The lower part of the commands is the solution to a similar question in another forum: https://www.digitalocean.com/community/questions/nginx-403-forbidden--2
So the fixed Dockerfile would look like this:
FROM alpine:3.3
MAINTAINER Roman Tarnavski
RUN apk add --update nginx
COPY nginx.conf /etc/nginx/
ADD ./dist/ /usr/share/nginx/html
RUN set -x ; \
addgroup -g 82 -S www-data ; \
adduser -u 82 -D -S -G www-data www-data && exit 0 ; exit 1
RUN chown -R www-data:www-data /usr/share/nginx/html/*
RUN chmod -R 0755 /usr/share/nginx/html/*
EXPOSE 8080
CMD nginx -g 'daemon off;'
Now if I rebuild and rerun swagger-ui-builder, the website shows up correctly.
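A quick way to confirm the fix from the host, expecting an HTTP 200 instead of the 403:
curl -I http://127.0.0.1:8080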