Clear Kubernetes persistent volume on new image - wordpress

I have a WordPress deployment that uses a custom Docker image with a custom theme copied into the image; it is deployed into a Kubernetes cluster with a persistent volume.
The initial deployment works great and the website presents as expected. The problem comes when I update the theme and deploy a new Docker image: because of the persistent volume, the theme files don't get updated to the new version of the theme shipped in the image.
Is there a way to clear/reset the wp-content/themes/my-theme directory when I deploy the new image?
Any help is appreciated; code samples are below.
Dockerfile:
FROM wordpress:latest
COPY ./my-theme /usr/src/wordpress/wp-content/themes/my-theme
Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: "nfs"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      imagePullSecrets:
        - name: gitlab-auth
      containers:
        - name: wordpress
          image: registry.gitlab.com/user/wordpress:1234
          imagePullPolicy: IfNotPresent
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_USER
              value: mysql_wordpress
            - name: WORDPRESS_DB_NAME
              value: wordpress
            - name: WORDPRESS_DB_TABLE_PREFIX
              value: _wp
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: mysql-password
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html/wp-content
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: wordpress

There is more than one way to do it. You could add an initContainer to the Deployment spec to remove existing files from the Persistent Volume before app containers in the Pod are started. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      imagePullSecrets:
        - name: gitlab-auth
      initContainers:
        - name: init-theme
          image: "alpine:3"
          command: ["sh", "-c", "if [ -d /var/www/html/wp-content/themes/my-theme ]; then rm -rf /var/www/html/wp-content/themes/my-theme; fi"]
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html/wp-content
      containers:
        ...
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: wordpress
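A variant of the same idea, if you want the new theme to land in the volume rather than only deleting the old copy: use your custom WordPress image itself as the init container and copy the theme from /usr/src/wordpress (where the Dockerfile above puts it) into the mounted wp-content. This is only a sketch based on the paths and image tag in the question, not a drop-in replacement:
      initContainers:
        - name: sync-theme
          image: registry.gitlab.com/user/wordpress:1234   # the same custom image
          command:
            - sh
            - -c
            - |
              # replace the stale theme in the volume with the copy baked into the image
              mkdir -p /var/www/html/wp-content/themes
              rm -rf /var/www/html/wp-content/themes/my-theme
              cp -a /usr/src/wordpress/wp-content/themes/my-theme /var/www/html/wp-content/themes/
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html/wp-content
With this variant the copy is explicit, so you are not relying on the WordPress entrypoint to repopulate the theme after the delete.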

Related

How to add a VirtualServer on top of a service in Kubernetes?

I have the following deployment and service config files for deploying a service to Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: write-your-own-name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: write-your-own-name
  template:
    metadata:
      labels:
        run: write-your-own-name
    spec:
      containers:
        - name: write-your-own-name-container
          image: gcr.io/cool-adviser-345716/automl:manual__2022-10-10T07_54_04_00_00
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: write-your-own-name
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    run: write-your-own-name
  type: LoadBalancer
K8s exposes the service on the endpoint (let's say) http://12.239.21.88
I then created a namespace nginx-deploy and installed nginx-controller using the command
helm install controller-free nginx-stable/nginx-ingress \
--set controller.ingressClass=nginx-deployment \
--set controller.service.type=NodePort \
--set controller.service.httpPort.nodePort=30040 \
--set controller.enablePreviewPolicies=true \
--namespace nginx-deploy
I then added a rate limit policy
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 1r/s
    key: ${binary_remote_addr}
    zoneSize: 10M
And then finally a VirtualServer
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: write-your-name-vs
spec:
  ingressClassName: nginx-deployment
  host: 12.239.21.88
  policies:
    - name: rate-limit-policy
  upstreams:
    - name: write-your-own-name
      service: write-your-own-name
      port: 80
  routes:
    - path: /
      action:
        pass: write-your-own-name
    - path: /hehe
      action:
        redirect:
          url: http://www.nginx.com
          code: 301
Before adding the VirtualServer, I could go to 12.239.21.88:80 and access my service, and I can still do that after adding it. But when I try accessing 12.239.21.88:80/hehe, I get a "detail not found" error.
I am guessing that this is because the VirtualServer is not actually sitting in front of the service. How do I expose my service through a VirtualServer? Or, alternatively: I want rate limiting on my service; how do I achieve that?
I used the following tutorial to get rate-limiting to work:
NGINX Tutorial: Protect Kubernetes APIs with Rate Limiting
I am sorry if the question is too long but I have been trying to figure out rate limiting for a while and can't get it to work. Thanks in advance.

Docker Desktop Kubernetes Windows PV Non Root Container

I'm trying to get WordPress running with a shared volume for wp-config.php across replicas. I'm developing my manifest on Docker Desktop for Windows on top of Ubuntu WSL 2. I've enabled the Kubernetes functionality of Docker Desktop, which seems to be working fine with the exception of PersistentVolume resources. Here are the relevant snippets from my manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
  namespace: yuknis-com
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  local:
    path: /c/Users/Kirkland/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - "docker-desktop"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: pvc0
  name: wordpress-pvc
  namespace: yuknis-com
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 60Gi
  storageClassName: local-storage
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: yuknis-com
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      volumes:
        - name: wordpress
          persistentVolumeClaim:
            claimName: wordpress-pvc
      initContainers:
        - name: volume-permissions
          image: busybox
          command: ['sh', '-c', 'chmod -R g+rwX /bitnami']
          volumeMounts:
            - mountPath: /bitnami
              name: wordpress
      containers:
        - name: wordpress
          image: yuknis/wordpress-nginx-phpredis:latest
          envFrom:
            - configMapRef:
                name: wordpress
          volumeMounts:
            - name: wordpress
              mountPath: /bitnami
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
When I try to run my application on macOS, it works fine with the above. However, when I try to run it on Windows, it fails in the initContainer step with this error:
chmod: /bitnami: Operation not permitted
chmod: /bitnami: Operation not permitted
Why might this work on macOS but not on Windows on top of WSL? Any ideas?
There is a known issue: Docker Desktop runs its own WSL distribution, so your Windows files are not accessible under the same root path from inside the cluster.
The workaround for this issue is to change the path in your PV:
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: hostpath
  local:
    path: /run/desktop/mnt/host/c/Users/Kirkland/pv0
Check the GitHub post I linked for considerations when using this method.
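For completeness, a minimal sketch of the full PersistentVolume with the workaround applied; it is just the PV from the question with the path and storageClassName swapped for the values above:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: hostpath
  local:
    path: /run/desktop/mnt/host/c/Users/Kirkland/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - "docker-desktop"
If you change the PV's storageClassName, the PVC's storageClassName presumably needs to match for the claim to bind.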

Unable to apply Nginx ingress controller

I have 3 VMs running on my local system: 1 master and 2 nodes. I have installed the Weave CNI network. I am trying to install the Nginx ingress controller with
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
But I'm unable to create it. I have tried the same on AWS EC2 instances, and it always crashes.
From kubectl describe, I am getting this error in the admission-create pod: MountVolume.SetUp failed for volume "kube-api-access-kdhpc" : object "ingress-nginx"/"kube-root-ca.crt" not registered
The admission-patch and controller pods keep restarting.
I'm kind of stuck here. I have tried using the Flannel CNI too and the result is the same.
Any suggestions are appreciated.
Try these steps; these configurations worked well for me.
Ingress Controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller
  namespace: ingress-space
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
        - args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --default-backend-service=app-space/default-http-backend
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          imagePullPolicy: IfNotPresent
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
      restartPolicy: Always
      serviceAccount: ingress-serviceaccount
      serviceAccountName: ingress-serviceaccount
      terminationGracePeriodSeconds: 30
Ingress Service
Apply this as a NodePort or LoadBalancer service, as per your configuration:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress-space
spec:
  ports:
    - name: http
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      nodePort: 31640
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    name: nginx-ingress
  type: NodePort
Role for Ingress
You will need to create a ServiceAccount for the ingress controller (any name of your choice), then apply these RBAC ClusterRole and ClusterRoleBinding manifests; a sketch of the ServiceAccount itself follows after them.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-role
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - list
      - watch
      - update
      - create
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
      - list
  - apiGroups:
      - networking.k8s.io
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
  - apiGroups:
      - networking.k8s.io
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - k8s.nginx.org
    resources:
      - virtualservers
      - virtualserverroutes
      - globalconfigurations
      - transportservers
      - policies
    verbs:
      - list
      - watch
      - get
  - apiGroups:
      - k8s.nginx.org
    resources:
      - virtualservers/status
      - virtualserverroutes/status
      - policies/status
      - transportservers/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-role
subjects:
  - kind: ServiceAccount
    name: ingress-serviceaccount
    namespace: ingress-space
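The ServiceAccount itself is not shown above. A minimal sketch, assuming you keep the ingress-serviceaccount name and ingress-space namespace used in the other manifests:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-serviceaccount
  namespace: ingress-space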
Your ingress setup is now ready; refer to https://kubernetes.io/docs/concepts/services-networking/ingress/ and apply your Ingress resource.
I think you should use this one (for bare metal):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.31.1/deploy/static/provider/baremetal/deploy.yaml
Also follow this article:
Nginx Ingress Controller - Failed Calling Webhook
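If that article is about the common "failed calling webhook" error, the workaround usually described for it is deleting the controller's admission webhook so that applying resources no longer fails on the webhook call; this is a general suggestion rather than something quoted from the article, so check the exact webhook name in your cluster first:
kubectl get validatingwebhookconfigurations
kubectl delete validatingwebhookconfiguration ingress-nginx-admission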

Kubernetes Mariadb service cannot be accessed

I wanted to set up WordPress with Kubernetes, but WordPress can't use the host from the mariadb service. This is my script:
---
apiVersion: v1
kind: Service
metadata:
  name: db-wordpress
  labels:
    app: mariadb-database
spec:
  selector:
    app: mariadb-database
  ports:
    - port: 3306
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb-database
spec:
  selector:
    matchLabels:
      app: mariadb-database
  template:
    metadata:
      labels:
        app: mariadb-database
    spec:
      containers:
        - name: mariadb-database
          image: darywinata/mariadb:1.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-secret
                  key: password
            - name: MYSQL_USER
              value: blibli
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-secret
                  key: password
            - name: MYSQL_DATABASE
              value: wpdb
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: servicetype
                    operator: In
                    values:
                      - database-mariadb
I have already been fighting this error for a week; can somebody help me with this?
Note: inside the Docker container, port 3306 is not listening. I don't know whether this is wrong or not.
Hi there and welcome to Stack Overflow.
There are two issues with your setup. First of all, I tried running your MySQL Docker image locally and, compared to the official mysql image, it is not listening on any port. Without the mysql process listening on a port, you will not be able to connect to it.
Also, you might want to consider a standard internal Service instead of one with clusterIP: None, which is called a headless service and is usually used for StatefulSets rather than Deployments; more information can be found in the official documentation. A sketch of a regular Service follows below.
So in order to connect from your application to your pod:
Fix the problem with your custom mysql image so that it actually listens on port 3306 (or whatever port you have configured in your image).
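For the second point, here is a minimal sketch of a regular (non-headless) Service for the database, reusing the labels and port from the question; treat it as an illustration rather than the exact change you need:
apiVersion: v1
kind: Service
metadata:
  name: db-wordpress
  labels:
    app: mariadb-database
spec:
  type: ClusterIP   # the default; the key change is dropping clusterIP: None
  selector:
    app: mariadb-database
  ports:
    - port: 3306
      targetPort: 3306
WordPress would then reach the database at db-wordpress:3306, once the container actually listens on that port.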

kubernetes persistent volume for nginx not showing the default index.html file

I am testing something out with PVs and wanted some clarification. We have an 18-node cluster (using Docker EE), and we have mounted an NFS share on each of these nodes to be used for k8s persistent storage. I created a PV (using hostPath) to bind to my nginx deployment (mounting /usr/share/nginx/html on the PV).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test-namespace-pv
  namespace: test-namespace
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/mynginx/demo"
The PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-namespace-pvc
  namespace: test-namespace
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Deployment File:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  selector:
    matchLabels:
      run: mynginx-apps
  replicas: 2
  template:
    metadata:
      labels:
        run: mynginx-apps
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: nfs-test-namespace-pvc
      containers:
        - name: mynginx
          image: dtr.midev.spglobal.com/spgmi/base:mynginx-v1
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
So I assume that when my pod starts, the default index.html file from the nginx image should be available at /usr/share/nginx/html within my pod, and it should also be copied/available at my /nfs_share/mynginx/demo.
However, I am not seeing any file there, and when I expose this deployment and access the service, it gives me a 403 error because the index file is not available. When I create an html file, either from inside the pod or from the node on the NFS share mounted as the PV, it works as expected.
Is my assumption that the default file gets copied to the hostPath correct? Or am I missing something?
Your /nfs_share/docker/mynginx/demo will not be available in the pod; the explanation is available here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The configuration file specifies that the volume is at /mnt/data on the cluster’s Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node. It defines the StorageClass name manual for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
You do not mount the PV in your pod directly; it is consumed via the PVC, which is then mounted inside the pod.
You can read the whole article, Configure a Pod to Use a PersistentVolume for Storage, which should answer all of your questions.
Also note that the "/mnt/data" directory has to be created on the node where your pod is actually running.
