I'm trying to get WordPress running with a shared volume for wp-config.php across replicas. I'm developing my manifest on Docker Desktop for Windows, on top of Ubuntu on WSL 2. I've enabled the Kubernetes functionality of Docker Desktop, which seems to be working fine with the exception of PersistentVolume resources. Here are the relevant snippets from my manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
  namespace: yuknis-com
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  local:
    path: /c/Users/Kirkland/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - "docker-desktop"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: pvc0
  name: wordpress-pvc
  namespace: yuknis-com
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 60Gi
  storageClassName: local-storage
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: yuknis-com
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      volumes:
        - name: wordpress
          persistentVolumeClaim:
            claimName: wordpress-pvc
      initContainers:
        - name: volume-permissions
          image: busybox
          command: ['sh', '-c', 'chmod -R g+rwX /bitnami']
          volumeMounts:
            - mountPath: /bitnami
              name: wordpress
      containers:
        - name: wordpress
          image: yuknis/wordpress-nginx-phpredis:latest
          envFrom:
            - configMapRef:
                name: wordpress
          volumeMounts:
            - name: wordpress
              mountPath: /bitnami
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
On macOS, the application runs fine with the above. However, when I try to run it on Windows, it fails in the initContainer with the error:
chmod: /bitnami: Operation not permitted
chmod: /bitnami: Operation not permitted
Why might this work on macOS but not on Windows on top of WSL? Any ideas?
This is a known issue: Docker Desktop has its own WSL distribution, so you can't access your files through the same root path.
The workaround is to change the path in your PV:
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: hostpath
  local:
    path: /run/desktop/mnt/host/c/Users/Kirkland/pv0
Check the GitHub post I linked for considerations when using this method.
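For completeness, here is a sketch of the full workaround PV, assuming the same metadata and node affinity as in the original manifest; only the storageClassName and path change:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: hostpath
  local:
    # Windows C:\ drive as seen through the Docker Desktop WSL mount
    path: /run/desktop/mnt/host/c/Users/Kirkland/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - "docker-desktop"

Note that the PVC's storageClassName then needs to match (hostpath instead of local-storage), or the claim will not bind to this volume.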
Related
I have the following deployment and service config files for deploying a service to Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: write-your-own-name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: write-your-own-name
  template:
    metadata:
      labels:
        run: write-your-own-name
    spec:
      containers:
        - name: write-your-own-name-container
          image: gcr.io/cool-adviser-345716/automl:manual__2022-10-10T07_54_04_00_00
          ports:
            - containerPort: 80
apiVersion: v1
kind: Service
metadata:
  name: write-your-own-name
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    run: write-your-own-name
  type: LoadBalancer
K8s exposes the service on the endpoint (let's say) http://12.239.21.88
I then created a namespace nginx-deploy and installed the NGINX ingress controller using the command:
helm install controller-free nginx-stable/nginx-ingress \
  --set controller.ingressClass=nginx-deployment \
  --set controller.service.type=NodePort \
  --set controller.service.httpPort.nodePort=30040 \
  --set controller.enablePreviewPolicies=true \
  --namespace nginx-deploy
I then added a rate limit policy
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 1r/s
    key: ${binary_remote_addr}
    zoneSize: 10M
And then finally a VirtualServer
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: write-your-name-vs
spec:
  ingressClassName: nginx-deployment
  host: 12.239.21.88
  policies:
    - name: rate-limit-policy
  upstreams:
    - name: write-your-own-name
      service: write-your-own-name
      port: 80
  routes:
    - path: /
      action:
        pass: write-your-own-name
    - path: /hehe
      action:
        redirect:
          url: http://www.nginx.com
          code: 301
Before adding the VirtualServer, I could go to 12.239.21.88:80 and access my service, and I can still do that after adding the VirtualServer. But when I try accessing 12.239.21.88:80/hehe, I get a "detail not found" error.
I am guessing this is because the VirtualServer is not sitting in front of the service. How do I expose my service through the VirtualServer? Or alternatively, I just want rate limiting on my service; how do I achieve that?
I used the following tutorial to get rate-limiting to work:
NGINX Tutorial: Protect Kubernetes APIs with Rate Limiting
I am sorry if the question is too long but I have been trying to figure out rate limiting for a while and can't get it to work. Thanks in advance.
I wanted to run WordPress with Kubernetes, but WordPress can't use the host from the mariadb service. This is my manifest:
---
apiVersion: v1
kind: Service
metadata:
  name: db-wordpress
  labels:
    app: mariadb-database
spec:
  selector:
    app: mariadb-database
  ports:
    - port: 3306
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb-database
spec:
  selector:
    matchLabels:
      app: mariadb-database
  template:
    metadata:
      labels:
        app: mariadb-database
    spec:
      containers:
        - name: mariadb-database
          image: darywinata/mariadb:1.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-secret
                  key: password
            - name: MYSQL_USER
              value: blibli
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: database-secret
                  key: password
            - name: MYSQL_DATABASE
              value: wpdb
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: servicetype
                    operator: In
                    values:
                      - database-mariadb
I've already been fighting this error for a week; can somebody help me with this?
Note: inside the Docker container, port 3306 is not listening; I don't know whether that is wrong or not.
Hi there and welcome to Stack Overflow.
There are two issues with your setup. First of all, I tried running your mysql Docker image locally, and compared to the official mysql image it does not listen on any port. Without the mysql process listening on a port, you will not be able to connect to it.
Also, you might want to consider a standard internal Service instead of one with clusterIP: None, which is a headless service and is usually used for StatefulSets rather than Deployments; more information can be found in the official documentation.
So, in order to connect from your application to your pod:
Fix the problem with your custom mysql image so that it actually listens on port 3306 (or whatever port you have configured in your image). A sketch of a standard Service is shown below.
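If you do switch to a standard internal Service, a minimal sketch could look like this (same name, selector, and port as in your manifest; only clusterIP: None is dropped so the Service gets a normal cluster IP):

apiVersion: v1
kind: Service
metadata:
  name: db-wordpress
  labels:
    app: mariadb-database
spec:
  selector:
    app: mariadb-database
  ports:
    - port: 3306        # port WordPress connects to
      targetPort: 3306  # port the mariadb process must actually listen on

WordPress would then reach the database at db-wordpress:3306 (or db-wordpress.<namespace>.svc.cluster.local), provided the container really does listen on 3306.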
I have a deployment of WordPress using a custom Docker image with a custom theme copied into the image, which gets deployed into a Kubernetes cluster with a persistent volume.
The initial deployment works great and the website presents as expected. The problem comes when I update the theme and deploy a new Docker image: because of the persistent volume, the theme files don't seem to be updated to the new version of the theme in the Docker image.
Is there a way to clear/reset the wp-content/themes/my-theme directory when I deploy the new image?
Any help is appreciated; code samples below.
Dockerfile:
FROM wordpress:latest
COPY ./my-theme /usr/src/wordpress/wp-content/themes/my-theme
Persistent Volume Claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: "nfs"
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      imagePullSecrets:
        - name: gitlab-auth
      containers:
        - name: wordpress
          image: registry.gitlab.com/user/wordpress:1234
          imagePullPolicy: IfNotPresent
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_USER
              value: mysql_wordpress
            - name: WORDPRESS_DB_NAME
              value: wordpress
            - name: WORDPRESS_DB_TABLE_PREFIX
              value: _wp
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: mysql-password
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html/wp-content
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: wordpress
There is more than one way to do it. You could add an initContainer to the Deployment spec to remove existing files from the Persistent Volume before app containers in the Pod are started. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      imagePullSecrets:
        - name: gitlab-auth
      initContainers:
        - name: init-theme
          image: "alpine:3"
          # remove the previously deployed theme from the PV so the copy baked into the new image takes effect
          command: ["sh", "-c", "if [ -d /var/www/html/wp-content/themes/my-theme ]; then rm -rf /var/www/html/wp-content/themes/my-theme; fi"]
          volumeMounts:
            - name: wordpress-data
              mountPath: /var/www/html/wp-content
      containers:
        ...
      volumes:
        - name: wordpress-data
          persistentVolumeClaim:
            claimName: wordpress
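As a rough check after pushing a new image tag, you can watch the rollout and confirm the init container completed (the commands assume the Deployment and container names above):

kubectl rollout status deployment/wordpress
kubectl describe pods -l app=wordpress   # init-theme should show as Terminated/Completed under Init Containers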
I am testing out something with PVs and wanted to get some clarification. We have an 18-node cluster (using Docker EE), and we have mounted an NFS share on each of these nodes to be used for k8s persistent storage. I created a PV (using hostPath) to bind it to my nginx deployment (mounting /usr/share/nginx/html to the PV).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test-namespace-pv
  namespace: test-namespace
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/mynginx/demo"
Here is how I create the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-namespace-pvc
  namespace: test-namespace
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Deployment File:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  selector:
    matchLabels:
      run: mynginx-apps
  replicas: 2
  template:
    metadata:
      labels:
        run: mynginx-apps
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: nfs-test-namespace-pvc
      containers:
        - name: mynginx
          image: dtr.midev.spglobal.com/spgmi/base:mynginx-v1
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
So I assume that when my pod starts, the default index.html file from the nginx image should be available at /usr/share/nginx/html within my pod, and that it should also be copied/available at /nfs_share/mynginx/demo.
However, I am not seeing any file there, and when I expose this deployment and access the service, it gives me a 403 error because the index file is not available. When I create an html file, either from inside the pod or from the node on the NFS share mounted as the PV, it works as expected.
Is my assumption that the default file gets copied to the hostPath correct? Or am I missing something?
Your /nfs_share/docker/mynginx/demo will not be available in the pod; the explanation is available here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The configuration file specifies that the volume is at /mnt/data on the cluster’s Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node. It defines the StorageClass name manual for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
You do not see the PV inside your pod directly; it is claimed through a PVC, which is then mounted into the pod.
You can read the whole article Configure a Pod to Use a PersistentVolume for Storage, which should answer all of these questions.
Also note that the /mnt/data directory must actually be created on the node where your pod is running.
I am deploying a Java application to Google Kubernetes Engine. The application starts correctly but fails when trying to request data. The exception is "DatastoreException: Missing or insufficient permissions". I created a service account with the "Owner" role and provided the service account key to Kubernetes. Here is how I apply the Kubernetes deployment:
# delete old secret
kubectl delete secret google-key --ignore-not-found
# file with key
kubectl create secret generic google-key --from-file=key.json
kubectl apply -f prod-kubernetes.yml
Here is deployment config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: user
  labels:
    app: user
spec:
  type: NodePort
  ports:
    - port: 8000
      name: user
      targetPort: 8000
      nodePort: 32756
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: userdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
    spec:
      volumes:
        - name: google-cloud-key
          secret:
            secretName: google-key
      containers:
        - name: usercontainer
          image: gcr.io/proj/user:v1
          imagePullPolicy: Always
          volumeMounts:
            - name: google-cloud-key
              mountPath: /var/secrets/google
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
          ports:
            - containerPort: 8000
I wonder why it is not working. I have used this config in a previous deployment and it worked.
UPD: I made sure that /var/secrets/google/key.json exists in the pod. I log Files.exists(System.getenv("GOOGLE_APPLICATION_CREDENTIALS")). I also print the content of this file; it does not seem to be corrupted.
Solved: the reason was an incorrect env variable name, GOOGLE_CLOUD_PROJECT.
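If the fix was setting GOOGLE_CLOUD_PROJECT alongside the credentials (my reading of the answer above, not confirmed by the original post), the container env might look like this; the project ID value is a placeholder:

env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
  - name: GOOGLE_CLOUD_PROJECT
    value: my-gcp-project-id   # placeholder; use your actual GCP project ID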