Kubernetes persistent volume for nginx not showing the default index.html file

I am testing out something with the PV and wanted to get some clarification. We have an 18-node cluster (using Docker EE) and we have mounted an NFS share on each of these nodes to be used for the k8s persistent storage. I created a PV (using hostPath) and bound it to my nginx deployment (mounting /usr/share/nginx/html to the PV).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-test-namespace-pv
  namespace: test-namespace
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/mynginx/demo"
The PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-namespace-pvc
  namespace: test-namespace
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
Deployment File:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mynginx
spec:
  selector:
    matchLabels:
      run: mynginx-apps
  replicas: 2
  template:
    metadata:
      labels:
        run: mynginx-apps
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: nfs-test-namespace-pvc
      containers:
        - name: mynginx
          image: dtr.midev.spglobal.com/spgmi/base:mynginx-v1
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: task-pv-storage
So I assume that when my pod starts, the default index.html file from the nginx image should be available at /usr/share/nginx/html within my pod, and that it should also be copied/available at /nfs_share/mynginx/demo.
However, I am not seeing any file there, and when I expose this deployment and access the service it gives me a 403 error because the index file is not available. When I create an html file, either from inside the pod or from the node on the NFS share mounted as the PV, it works as expected.
Is my assumption that the default file gets copied to the hostPath correct, or am I missing something?

Your /nfs_share/docker/mynginx/demo will not be available in the pod; the explanation is available here:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
The configuration file specifies that the volume is at /mnt/data on the cluster’s Node. The configuration also specifies a size of 10 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node. It defines the StorageClass name manual for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.
You do not see the PV itself inside your pod; the PV is claimed by a PVC, and it is the PVC that gets mounted inside the pod.
You can read the whole article, Configure a Pod to Use a PersistentVolume for Storage, which should answer all of your questions.

the "/mnt/data" directory should be created on the node which your pod running actually.

Related

Airflow helm chart 1.7 - Mounting DAGs from an externally populated PVC and non-default DAG path

I want to use Airflow in Kubernetes on my local machine.
According to the Airflow helm chart docs I should use a PVC to mount my local DAG files, so I set up my PV and PVC like so:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dags-pv
spec:
  volumeMode: Filesystem
  storageClassName: local-path
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/c/Users/me/dags
    type: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dags-pvc
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
Then I create an override-values.yaml file:
config:
  core:
    dags_folder: "/usr/somepath/airflow/dags"
dags:
  persistence:
    enabled: true
    existingClaim: "dags-pvc"
  gitSync:
    enabled: false
Note that I want to change the default DAG folder path, and that is where I am having difficulties (it works if I keep the default path). I don't know how to create a mount point and attach the PVC to it.
I tried to add, in my override file:
worker:
  extraVolumeMounts:
    - name: w-dags
      mountPath: "/usr/somepath/airflow/dags"
  extraVolumes:
    - name: w-dags
      persistentVolumeClaim:
        claimName: "dags-pvc"
scheduler:
  extraVolumeMounts:
    - name: s-dags
      mountPath: "/usr/somepath/airflow/dags"
  extraVolumes:
    - name: s-dags
      persistentVolumeClaim:
        claimName: "dags-pvc"
But that doesn't work; my scheduler is stuck on Init:0/1: "Unable to attach or mount volumes: unmounted volumes=[dags], unattached volumes=[logs dags s-dags config kube-api-access-9mc4c]: timed out waiting for the condition". So I can tell I broke a condition, namely that dags should be mounted (i.e. my extraVolumes section is wrong), but I am not sure where to go from here.
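One generic way to narrow this down (my suggestion, not from the chart docs): confirm that the PVC actually bound to the hostPath PV, and then read the stuck scheduler pod's events, since an unbound claim or a failing hostPath type: Directory check would produce exactly this mount timeout.

kubectl get pv dags-pv
kubectl get pvc dags-pvc
# Namespace and label below are assumptions for a default chart install;
# use `kubectl get pods -n <your-namespace>` to find the scheduler pod name if they differ
kubectl describe pod -n airflow -l component=scheduler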

Custom nginx.conf can be applied through a ConfigMap to an Nginx pod, but can we achieve this through Secrets in Kubernetes for more security?

I was able to set up a ConfigMap for a custom nginx.conf and mount it into the Nginx pod, and this works well.
My requirement is to make the credentials inside nginx.conf more secure by using a Secret.
I tried base64-encoding the nginx.conf file and putting it in the Secret YAML file, but applying the deployment file throws an error.
Kindly guide me with some insights on whether this can be achieved with a Secret, as the issue lies in the Secret's data portion.
Secret file:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: nginx-secret
data:
  nginx.conf: |
    *************************************************
Below is the error when applying the nginx deployment file:
error validating data: ValidationError(Deployment.spec.template.spec.volumes[0].secret): unknown field "name" in io.k8s.api.core.v1.SecretVolumeSource; if you choose to ignore these errors, turn validation off with --validate=false
Secrets and ConfigMaps are handled essentially identically here. Both can be mounted as volumes in your pods; if you want to use a Secret instead of a ConfigMap, replace:
volumes:
  - name: nginx-config
    configMap:
      name: nginx-config
With:
volumes:
  - name: nginx-config
    secret:
      secretName: nginx-secret
But note that this does not get you any additional security! Values in a Secret are stored unencrypted by default (only base64-encoded) and can be read by anyone with the necessary permissions to read the Secret, just like a ConfigMap.
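For instance, anyone who can read the Secret can recover the file with nothing more than kubectl and base64 (names here match the example Secret above):

kubectl get secret nginx-secret -o jsonpath='{.data.nginx\.conf}' | base64 -d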
A complete Deployment might look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: docker.io/nginx:mainline
          imagePullPolicy: Always
          name: nginx
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              name: nginx-config
      volumes:
        - name: nginx-config
          secret:
            secretName: nginx-secret
This will mount keys from the secret (such as default.conf) in /etc/nginx/conf.d. The contents will not be base64 encoded.
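If hand-maintaining base64 in the Secret manifest is the part that keeps failing, you can also let kubectl do the encoding for you; a minimal sketch, assuming your config file is named nginx.conf in the current directory:

kubectl create secret generic nginx-secret --from-file=nginx.conf
# or render the YAML instead of creating the object directly:
kubectl create secret generic nginx-secret --from-file=nginx.conf --dry-run=client -o yaml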

Docker Desktop Kubernetes Windows PV Non Root Container

I'm trying to get WordPress running with a shared volume for wp-config.php across replicas. I'm developing my manifest on Docker Desktop for Windows, on top of Ubuntu on WSL 2. I've enabled the Kubernetes functionality of Docker Desktop, which seems to be working fine with the exception of PersistentVolume resources. Here are the relevant snippets from my manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0
  namespace: yuknis-com
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: local-storage
  local:
    path: /c/Users/Kirkland/pv0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - "docker-desktop"

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: pvc0
  name: wordpress-pvc
  namespace: yuknis-com
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 60Gi
  storageClassName: local-storage

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: yuknis-com
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      volumes:
        - name: wordpress
          persistentVolumeClaim:
            claimName: wordpress-pvc
      initContainers:
        - name: volume-permissions
          image: busybox
          command: ['sh', '-c', 'chmod -R g+rwX /bitnami']
          volumeMounts:
            - mountPath: /bitnami
              name: wordpress
      containers:
        - name: wordpress
          image: yuknis/wordpress-nginx-phpredis:latest
          envFrom:
            - configMapRef:
                name: wordpress
          volumeMounts:
            - name: wordpress
              mountPath: /bitnami
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
When I run my application on macOS, it works fine with the above. However, when I run it on Windows, it fails in the initContainer portion with the error:
chmod: /bitnami: Operation not permitted
chmod: /bitnami: Operation not permitted
Why might this work on MacOS, but not on Windows on top of the WSL? Any ideas?
There is a known issue: Docker Desktop has its own WSL distribution, so the path cannot be accessed under the same root as in your Ubuntu distribution.
Workaround for this issue is to change path in your PV:
spec:
  capacity:
    storage: 60Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: hostpath
  local:
    path: /run/desktop/mnt/host/c/Users/Kirkland/pv0
Check the GitHub post I linked for considerations when using this method.
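If you want to sanity-check the path first, you can look inside Docker Desktop's own WSL distribution; this is a generic check (the distribution name assumes a standard Docker Desktop install), not something from the linked post:

wsl -d docker-desktop ls /run/desktop/mnt/host/c/Users/Kirkland/pv0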

kubernetes application throws DatastoreException, Missing or insufficient permissions. Service key file provided

I am deploying a Java application on Google Kubernetes Engine. The application starts correctly but fails when trying to request data with a "DatastoreException: Missing or insufficient permissions". I created a service account with the "Owner" role and provided the service account key to Kubernetes. Here is how I apply the Kubernetes deployment:
# delete old secret
kubectl delete secret google-key --ignore-not-found
# file with key
kubectl create secret generic google-key --from-file=key.json
kubectl apply -f prod-kubernetes.yml
Here is deployment config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: user
  labels:
    app: user
spec:
  type: NodePort
  ports:
    - port: 8000
      name: user
      targetPort: 8000
      nodePort: 32756
  selector:
    app: user
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: userdeployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: user
    spec:
      volumes:
        - name: google-cloud-key
          secret:
            secretName: google-key
      containers:
        - name: usercontainer
          image: gcr.io/proj/user:v1
          imagePullPolicy: Always
          volumeMounts:
            - name: google-cloud-key
              mountPath: /var/secrets/google
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/google/key.json
          ports:
            - containerPort: 8000
I wonder why it is not working; I have used this config in a previous deployment with success.
UPD: I made sure that /var/secrets/google/key.json exists in the pod. I log Files.exists() for the path from System.getenv("GOOGLE_APPLICATION_CREDENTIALS"). I also print the content of this file; it does not seem to be corrupted.
Solved: the reason was an incorrect env var name, GOOGLE_CLOUD_PROJECT.
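For reference, a hedged sketch of what the container's env block could look like with both variables set; the project id value is a placeholder, not something from the original post:

env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
  - name: GOOGLE_CLOUD_PROJECT
    value: my-gcp-project-id   # placeholder; use your actual GCP project id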

Kubernetes persistent storage causing files to disappear inside container

I am trying to create a Drupal container on Kubernetes with the Apache Drupal image.
When a persistent volume is mounted at /var/www/html and I inspect the Drupal container with docker exec -it <drupal-container-name> bash, there are no files visible, so no files can be served.
Workflow
1 - Create google compute disk
gcloud compute disks create --size=20GB --zone=us-central1-c drupal-1
2 - Register the newly created google compute disk to the kubernetes cluster instance
kubectl create -f gce-volumes.yaml
3 - Create Drupal pod
kubectl create -f drupal-deployment.yaml
The definition files are inspired by the WordPress example. My drupal-deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: drupal
  labels:
    app: drupal
spec:
  ports:
    - port: 80
  selector:
    app: drupal
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dp-pv-claim
  labels:
    app: drupal
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: drupal
  labels:
    app: drupal
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: drupal
        tier: frontend
    spec:
      containers:
        - name: drupal
          image: drupal:8.2.3-apache
          ports:
            - name: drupal
              containerPort: 80
          volumeMounts:
            - name: drupal-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: drupal-persistent-storage
          persistentVolumeClaim:
            claimName: dp-pv-claim
And the gce-volumes.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: drupal-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: drupal-1
    fsType: ext4
What's causing the files to disappear? How can I successfully persist the Drupal installation files at /var/www/html inside the container?
You are mounting the persistent storage at /var/www/html, so if you had files at that location in your base image, the folder is replaced by the mount and the files from the base image are no longer available.
If your content will be dynamic and you want to save those files in the persistent storage, this is the way to do it; however, you will need to populate the persistent storage initially.
One solution would be to have the files in a different folder in the base image and copy them over when you run the Pod. However, this will happen every time you start the Pod and may overwrite your changes, unless your copy script first checks whether the folder is empty (sketched below).
Another option is to have a Job do this only once (before you run your Drupal Pod and mount this storage).
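As a rough sketch of the copy-if-empty idea (mine, not from the original answer): an initContainer can mount the claim somewhere other than /var/www/html, so the image's own files stay visible, and copy them across only while the volume is still empty.

      initContainers:
        - name: seed-html
          image: drupal:8.2.3-apache
          command:
            - sh
            - -c
            - |
              # copy the image's document root into the volume only on first run
              if [ -z "$(ls -A /data 2>/dev/null)" ]; then
                cp -a /var/www/html/. /data/
              fi
          volumeMounts:
            - name: drupal-persistent-storage
              mountPath: /data   # mounted away from /var/www/html so the image files remain visible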
Note:
GCE persistent disks on GKE only allow ReadWriteOnce (the volume can be mounted read-write by only a single node) or ReadOnlyMany (read-only, but mountable by many Pods), and a disk cannot be mounted with different modes at the same time, so in the end you can only run one of these Pods (i.e. it will not scale).
