Change index.html in an nginx Kubernetes deployment

I have this deployment:
I was able to edit the index page on one of my pods, but how can I commit it to the deployment image, so that when I scale the application all new pods will have the same image with the edited index?

This worked for me:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /usr/share/nginx/html/index.html
      name: nginx-conf
      subPath: index.html
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-index-html-configmap
And a simple ConfigMap with:
data:
  index.html: |
    <html></html>

You will have to create a new image with the updated index.html and then use this new image in your deployments.
If you want index.html to be easily modifiable:
1. Create a new image without the index.html file.
2. Store the contents of index.html in a ConfigMap.
3. Volume-mount the ConfigMap (as explained here) onto the path where you want index.html to appear.
Then, whenever you want to update index.html, you just have to update the ConfigMap and wait a few minutes. Kubernetes will take care of syncing the updated index.html.
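For example, an update could be pushed like this (a sketch; newer kubectl spells the flag --dry-run=client, older versions accept plain --dry-run):
kubectl create configmap nginx-index-html-configmap \
  --from-file=index.html -o yaml --dry-run=client | kubectl apply -f -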

Following this answer and this readme
One can generate the ConfigMap manifest with the following command:
kubectl create configmap nginx-index-html-configmap --from-file=index.html -o yaml --dry-run
Then add this ConfigMap as a volume and volumeMount in the Kubernetes Deployment object.

Use an init container for any preprocessing, or, as stated above, build the change into the Docker image before using it.
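A minimal sketch of the init-container route (the container names, the busybox image, and the echo preprocessing step are illustrative assumptions, not from the question):
spec:
  initContainers:
  - name: prepare-html              # hypothetical name
    image: busybox
    # stand-in for whatever preprocessing generates the page
    command: ["sh", "-c", "echo '<html>generated at startup</html>' > /work/index.html"]
    volumeMounts:
    - mountPath: /work
      name: html
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: html
  volumes:
  - name: html
    emptyDir: {}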


A custom nginx.conf can be applied to an Nginx pod through a ConfigMap, but can this be achieved through a Secret in Kubernetes for more security?

I was able to set up a ConfigMap for a custom nginx.conf and mount it into the Nginx pod, and this works well.
My requirement is to make the credentials inside nginx.conf more secure by using a Secret instead.
I tried base64-encoding the nginx.conf file and putting it in a Secret YAML file, but applying the deployment file throws an error.
Kindly guide me with some insights on whether this can be achieved with a Secret, as the issue lies with the Secret's data portion.
Secret file:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: nginx-secret
data:
  nginx.conf: |
    *************************************************
Below is the error shown when applying the nginx deployment file:
error validating data: ValidationError(Deployment.spec.template.spec.volumes[0].secret): unknown field "name" in io.k8s.api.core.v1.SecretVolumeSource; if you choose to ignore these errors, turn validation off with --validate=false
Secrets and ConfigMaps are essentially identical. Both can be mounted as volumes in your pods; if you want to use a Secret instead of ConfigMap, replace:
volumes:
- name: nginx-config
  configMap:
    name: nginx-config
With:
volumes:
- name: nginx-config
  secret:
    secretName: nginx-secret
But note that this does not get you any additional security! Values in a Secret are stored in plaintext and can be read by anyone with the necessary permissions to read the secret, just like a ConfigMap.
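If you create the Secret from the existing file, kubectl does the base64 encoding for you, e.g. (a sketch):
kubectl create secret generic nginx-secret --from-file=nginx.conf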
A complete Deployment might look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: docker.io/nginx:mainline
        imagePullPolicy: Always
        name: nginx
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: nginx-config
      volumes:
      - name: nginx-config
        secret:
          secretName: nginx-secret
This will mount keys from the secret (such as default.conf) in /etc/nginx/conf.d. The contents will not be base64 encoded.
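You can confirm the decoded contents from a running pod (the pod name is a placeholder):
kubectl exec <nginx-pod> -- ls /etc/nginx/conf.d
kubectl exec <nginx-pod> -- cat /etc/nginx/conf.d/nginx.conf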

How to use a config map to create a single file within container?

I'm trying to configure an Nginx container on OpenShift. My final goal is to overwrite the Nginx config file. I know it's possible using config maps. Because any failure while modifying the Nginx config directories crashes the container, my interim goal is just to create an index.html file in the /opt/app-root/src directory.
I'm facing one of two problems, depending on the configuration:
the config map overwrites the whole /opt/app-root/src directory
the config map creates an index.html directory with the index file inside
Config map:
apiVersion: v1
data:
  index: |-
    <html>
    <body>
    yo yo!
    </body>
    </html>
kind: ConfigMap
metadata:
  creationTimestamp: '2020-01-16T12:53:25Z'
  name: index-for-nginx
  namespace: some-namespace
Deployment config (part, related to the topic):
spec:
  containers:
  - image: someimage
    ...
    volumeMounts:
    - mountPath: /opt/app-root/src/index.html
      name: index
  volumes:
  - configMap:
      defaultMode: 420
      name: index-for-nginx
    name: index
When:
volumeMounts:
- mountPath: /opt/app-root/src/index.html
it creates an index.html directory in /opt/app-root/src with an index file (with the proper content) inside.
When:
volumeMounts:
- mountPath: /opt/app-root/src/
it overwrites the /opt/app-root/src directory.
My question is - how should I configure it to create index.html file in /opt/app-root/src without overwriting the directory?
You can use subPath to mount the single file you want.
volumeMounts:
- mountPath: /opt/app-root/src/index.html
  subPath: index
  name: index
https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath
You can do the following, taken from this GitHub issue
containers:
- volumeMounts:
  - name: config-volumes
    mountPath: /opt/app-root/src/index.html
    subPath: index
volumes:
- name: config-volumes
  configMap:
    name: index-for-nginx
Note: A container using a ConfigMap as a subPath volume will not receive ConfigMap updates.
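If you do need running pods to pick up a changed ConfigMap despite that limitation, one workaround (my suggestion, not from the linked issue) is to force a new rollout so fresh pods mount the updated data:
kubectl rollout restart deployment <your-deployment>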

Error when accessing Nextcloud in Kubernetes

My goal is :
create a pod with Nextcloud
create a service to access this pod
from another machine running nginx, route a CNAME to the service
I tried to deploy a pod with Nextcloud and a service to access it, but I can't actually access it. I get the error:
message ERR_SSL_PROTOCOL_ERROR.
I initially followed a tutorial, but I didn't want to use nginx as it explained because I already have it on another machine.
When I look at the pods (nextcloud + db) and services, they look OK, but I get no response when I try to access Nextcloud.
(nc = nextcloud)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nc
  name: nc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nc
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nc
    spec:
      containers:
      - env:
        - name: DEBUG
          value: "false"
        - name: NEXTCLOUD_URL
          value: http://test.fr
        - name: NEXTCLOUD_ADMIN_USER
          value: admin
        - name: NEXTCLOUD_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: NEXTCLOUD_ADMIN_PASSWORD
        - name: NEXTCLOUD_UPLOAD_MAX_FILESIZE
          value: 4G
        - name: NEXTCLOUD_MAX_FILE_UPLOADS
          value: "20"
        - name: MYSQL_DATABASE
          value: nextcloud
        - name: MYSQL_HOST
          value: mariadb
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_USER
          value: nextcloud
        name: nc
        image: nextcloud
        ports:
        - containerPort: 80
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: vnextcloud
          subPath: html
        - mountPath: /var/www/html/custom_apps
          name: vnextcloud
          subPath: apps
        - mountPath: /var/www/html/config
          name: vnextcloud
          subPath: config
        - mountPath: /var/www/html/data
          name: vimages
          subPath: imgnc
        - mountPath: /var/www/html/themes
          name: vnextcloud
          subPath: themes
      restartPolicy: Always
      volumes:
      - name: vnextcloud
        persistentVolumeClaim:
          claimName: nfs-pvcnextcloud
      - name: vimages
        persistentVolumeClaim:
          claimName: nfs-pvcimages
For creating the service I use this command line :
kubectl expose deployment nc --type=NodePort --name=svc-nc --port 80
And to access my Nextcloud I tried the address <MASTER_IP>:32500.
My questions are:
How can I check whether a pod is working well, to know if the problem comes from the service or from the pod?
What should I do to get access to my Nextcloud? I didn't do the tutorial part "Create self-signed certificates" because I don't know how to manage it. Should it be done on my other Linux machine or in my Kubernetes cluster?
1. Please consider using the stable nextcloud helm chart.
2. This tutorial is a little outdated; it can also be found here.
As of the Kubernetes 1.16 release you should change apiVersion in all your deployments to apiVersion: apps/v1; please take a look at Deprecations and Removals.
In addition, you would otherwise get the error ValidationError(Deployment.spec): missing required field "selector", so please add selectors in your deployments under Deployment.spec, like:
selector:
  matchLabels:
    app: db
3. Finally, Create self-signed certificates: this repo uses OMGWTFSSL - Self Signed SSL Certificate Generator. Once you provide the necessary information (server name, path to your local hostPath, and names for your SSL certificates), they will be created automatically after one pod run under the specified hostPath:
volumes:
- name: certs
  hostPath:
    path: "/home/<someFolderLocation>/certs-pv"
This information should be reused in the Nginx reverse proxy section for nginx.conf.
4. In your nc-svc.yaml you can change the service type to type: NodePort.
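For reference, a minimal NodePort Service matching the labels above (a sketch; the 32500 nodePort matches the output shown below):
apiVersion: v1
kind: Service
metadata:
  name: svc-nc
spec:
  type: NodePort
  selector:
    app: nc
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32500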
5. How to verify that your service is working properly:
kubectl get pods,svc,ep -o wide
Pods:
pod/nc-6d8694659d-5przx 1/1 Running 0 15m 10.244.0.6
Svc:
service/svc-nc NodePort 10.102.90.88 <none> 80:32500/TCP
Endpoints:
endpoints/svc-nc 10.244.0.6:80
You can test your service from inside the cluster by running a separate pod (e.g. ubuntu):
curl your_svc_name
You can verify that service discovery is working properly:
cat /etc/resolv.conf
nslookup your_svc_name (your_svc_name.default.svc.cluster.local)
From outside the cluster, using NodePort:
curl NODE_IP:NODE_PORT (if this fails, please verify your firewall rules)
Once you have provided a hostname for your nextcloud service, you should use:
curl -vH 'Host:specified_hostname' http://external_ip/ (using http or https according to your configuration)
In addition, you can exec directly into your db pod with
kubectl exec -it db_pod -- /bin/bash and run:
mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
mysqlshow -uroot -p$MYSQL_ROOT_PASSWORD --status nextcloud
6. "What should I do to have access to my nextcloud? I didn't do the tuto part 'Create self-signed certificates' because I don't know how to manage."
7. As described under point 3.
8. This part is not clear to me: "from another machine with nginx route a CNAME to the service".
Please refer to:
An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead.
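For illustration, a minimal ExternalName Service (the service name and DNS target are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: nc-external                      # placeholder name
spec:
  type: ExternalName
  externalName: nextcloud.example.com    # DNS name the service should resolve to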
Additional resources:
Expose your Kubernetes service from your own custom domains
What’s the difference between a CNAME and a Web Redirect?
Hope this helps.

minikube + nginx + volumeMount not working

I'm following the example found here.
I'm simply trying to understand how volumes work with Kubernetes. I'm testing locally so I need to contend with minikube. I'm trying to make this as simple as possible. I'm using nginx and would like to have it display content that is mounted from a folder on my localhost.
Environment:
macOS 10.12.5
minikube 0.20.0 + xhyve VM
I'm using the latest nginx image from Docker Hub with no modifications.
This works perfectly when I run the docker image outside of minikube.
docker run --name flow-4 \
-v $(pwd)/website:/usr/share/nginx/html:ro \
-P -d nginx
But when I try to run it in minikube, I always get a 404 response when I visit the hosted page. Why?
Here are my kubernetes config files...
kubernetes/deploy/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  replicas: 1
  selector:
    matchLabels:
      run: flow-4
  template:
    metadata:
      labels:
        run: flow-4
    spec:
      containers:
      - image: nginx
        name: flow-4
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: flow-4-volume
      volumes:
      - name: flow-4-volume
        hostPath:
          path: /Users/myuser/website
kubernetes/deploy/svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: flow-4
  type: NodePort
Finally, I run it like this:
kubectl create -f kubernetes/deploy/
minikube service flow-4
When it opens in my browser, instead of seeing the index.html page from my website folder, I just get a '404 Not Found' message (above an nginx/1.13.3 footer).
Why am I getting 404? Is nginx not able to see the contents of my mounted folder? Does the VM hosting kubernetes not have access to my 'website' folder?
I suspect this is the problem. I open a shell in the Kubernetes pod:
kubectl exec -it flow-4-1856897391-m0jh1 /bin/bash
When I look in the /usr/share/nginx/html folder, it is empty. If I manually add an index.html file, then I can see it in my browser. But why won't Kubernetes mount my local drive to this folder?
Update
There seems to be something wrong with mounting full paths from my /Users/** folder. Instead, I used the minikube mount command to mount the local folder containing index.html into the minikube VM. Then, in a separate terminal, I started my deployment and it could see the index.html file just fine.
Here is my updated deployment.yaml file, which has clearer file names to better explain the different folders and where they are mounted.
kubernetes/deploy/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  replicas: 1
  selector:
    matchLabels:
      run: flow-4
  template:
    metadata:
      labels:
        run: flow-4
    spec:
      containers:
      - image: nginx
        name: flow-4
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: flow-4-volume
      volumes:
      - name: flow-4-volume
        hostPath:
          path: /kube-website
It uses the same svc.yaml file from earlier in the post.
I then ran the whole thing like this from my current directory.
1. mkdir local-website
2. echo 'Hello from local storage' > local-website/index.html
3. minikube mount local-website:/kube-website
Let this run....
In a new terminal, same folder...
4. kubectl create -f kubernetes/deploy/
Once all the pods are running...
5. minikube service flow-4
You should see the 'Hello from local storage' message greet you in your browser. You can edit the local index.html file and then refresh your browser to see the contents change.
You can tear it all down with this...
kubectl delete deployments,services flow-4
Probably the folder you created does not exist on the Kubernetes node (which is the minikube VM).
Try creating the folder inside the VM and try again:
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip)
mkdir /Users/myuser/website
Also take a look at the minikube host folder mount feature.
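For instance, the mount can also be declared when the VM starts (a sketch; check minikube start --help for the exact flags on your version):
minikube start --mount --mount-string="/Users/myuser/website:/kube-website"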

Kubernetes persistent storage causing files to disappear inside container

Trying to create a Drupal container on Kubernetes with the apache drupal image.
When a persistent volume is mounted at /var/www/html and the Drupal container is inspected with docker exec -it <drupal-container-name> bash, there are no files visible, and thus no files can be served.
Workflow
1 - Create google compute disk
gcloud compute disks create --size=20GB --zone=us-central1-c drupal-1
2 - Register the newly created google compute disk to the kubernetes cluster instance
kubectl create -f gce-volumes.yaml
3 - Create Drupal pod
kubectl create -f drupal-deployment.yaml
The definition files are inspired by the WordPress example; my drupal-deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: drupal
  labels:
    app: drupal
spec:
  ports:
  - port: 80
  selector:
    app: drupal
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dp-pv-claim
  labels:
    app: drupal
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: drupal
  labels:
    app: drupal
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: drupal
        tier: frontend
    spec:
      containers:
      - name: drupal
        image: drupal:8.2.3-apache
        ports:
        - name: drupal
          containerPort: 80
        volumeMounts:
        - name: drupal-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: drupal-persistent-storage
        persistentVolumeClaim:
          claimName: dp-pv-claim
And the gce-volumes.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: drupal-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: drupal-1
    fsType: ext4
What's causing the files to disappear? How can I successfully persist the Drupal installation files at /var/www/html inside the container?
You are mounting the persistent storage at /var/www/html, so if the base image had files at that location, the folder is replaced by the mount and the files from the base image are no longer available.
If your content will be dynamic and you want to save those files in the persistent storage, this is the way to do it; however, you will need to populate the persistent storage initially.
One solution is to keep the files in a different folder in the base image and copy them over when you run the Pod. This will happen every time the Pod starts and may overwrite your changes, unless your copy script first checks whether the folder is empty.
Another option is to have a Job do this only once, before you run your Drupal Pod and mount this storage.
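A sketch of the copy-if-empty approach as an initContainer (the initContainer pattern, the /data mount path, and the shell check are my suggestion; the image and volume names follow the question):
spec:
  initContainers:
  - name: seed-drupal               # hypothetical name
    image: drupal:8.2.3-apache
    # Mount the volume at /data so the image's /var/www/html stays visible,
    # then copy the files over only if the volume is still empty.
    command: ["sh", "-c", "[ -n \"$(ls -A /data)\" ] || cp -a /var/www/html/. /data/"]
    volumeMounts:
    - name: drupal-persistent-storage
      mountPath: /data
  containers:
  - name: drupal
    image: drupal:8.2.3-apache
    volumeMounts:
    - name: drupal-persistent-storage
      mountPath: /var/www/html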
Note:
GKE persistent storage only allows ReadWriteOnce (which can be mounted by only a single Pod) or ReadOnlyMany (readable only, but mountable by many Pods), and it cannot be mounted with different modes at the same time, so in the end you can only run one of these Pods (i.e. it will not scale).
