minikube + nginx + volumeMount not working - nginx

I'm following the example found here.
I'm simply trying to understand how volumes work with Kubernetes. I'm testing locally so I need to contend with minikube. I'm trying to make this as simple as possible. I'm using nginx and would like to have it display content that is mounted from a folder on my localhost.
Environment:
macOS 10.12.5
minikube 0.20.0 + xhyve VM driver
I'm using the latest nginx image from Docker Hub with no modifications.
This works perfectly when I run the docker image outside of minikube.
docker run --name flow-4 \
-v $(pwd)/website:/usr/share/nginx/html:ro \
-P -d nginx
But when I try to run it in minikube, I always get a 404 response when I visit the hosted page. Why?
Here are my kubernetes config files...
kubernetes/deploy/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  replicas: 1
  selector:
    matchLabels:
      run: flow-4
  template:
    metadata:
      labels:
        run: flow-4
    spec:
      containers:
      - image: nginx
        name: flow-4
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: flow-4-volume
      volumes:
      - name: flow-4-volume
        hostPath:
          path: /Users/myuser/website
kubernetes/deploy/svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: flow-4
  type: NodePort
Finally, I run it like this:
kubectl create -f kubernetes/deploy/
minikube service flow-4
When it opens in my browser, instead of seeing my index.html page from the website folder, I just get a '404 Not Found' message (above an nginx/1.13.3 footer).
Why am I getting 404? Is nginx not able to see the contents of my mounted folder? Does the VM hosting kubernetes not have access to my 'website' folder?
I suspect this is the problem. I exec into the Kubernetes pod:
kubectl exec -it flow-4-1856897391-m0jh1 /bin/bash
When I look in the /usr/share/nginx/html folder, it is empty. If I manually add an index.html file, then I can see it in my browser. But why won't Kubernetes mount my local drive to this folder?
Update
There seems to be something wrong with mounting full paths from my /Users/** folder. Instead, I used the 'minikube mount' command to mount a local folder containing index.html into the minikube VM. Then in a separate terminal I started my deployment and it could see the index.html file just fine.
Here is my updated deployment.yaml file which has clearer file names to better explain the different folders and where they are mounted...
kubernetes/deploy/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  replicas: 1
  selector:
    matchLabels:
      run: flow-4
  template:
    metadata:
      labels:
        run: flow-4
    spec:
      containers:
      - image: nginx
        name: flow-4
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: flow-4-volume
      volumes:
      - name: flow-4-volume
        hostPath:
          path: /kube-website
It's using the same svc.yaml file from earlier in the post.
I then ran the whole thing like this from my current directory.
1. mkdir local-website
2. echo 'Hello from local storage' > local-website/index.html
3. minikube mount local-website:/kube-website
Let this run....
In a new terminal, same folder...
4. kubectl create -f kubernetes/deploy/
Once all the pods are running...
5. minikube service flow-4
You should see the 'Hello from local storage' message greet you in your browser. You can edit the local index.html file and then refresh your browser to see the contents change.
You can tear it all down with this...
kubectl delete deployments,services flow-4

The folder you created probably doesn't exist on the Kubernetes node (which is the minikube VM).
Try creating the folder inside the VM and try again:
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip)
mkdir -p /Users/myuser/website
Also take a look at the minikube host folder mount feature.
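For example, a minimal sketch, assuming a local website folder in the current directory and an arbitrary VM-side path /website (both names are illustrative, not from the question). Keep the mount running in one terminal:

minikube mount $(pwd)/website:/website

and point the hostPath in the deployment at the VM-side path:

volumes:
- name: flow-4-volume
  hostPath:
    path: /website   # path as seen inside the minikube VM, not on the Mac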

Related

Error when accessing Nextcloud in Kubernetes

My goals are:
create a pod with Nextcloud
create a service to access this pod
from another machine with nginx, route a CNAME to the service
I tried to deploy a pod with Nextcloud and a service to access it, but I can't actually reach it. I get the error ERR_SSL_PROTOCOL_ERROR.
I just followed a tutorial at first, but I didn't want to use nginx as explained there because I have it on another machine.
When I look at pods (nextcloud + db) and services they look ok but I have no response when I try to access nextcloud.
(nc = nextcloud)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nc
  name: nc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nc
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nc
    spec:
      containers:
      - env:
        - name: DEBUG
          value: "false"
        - name: NEXTCLOUD_URL
          value: http://test.fr
        - name: NEXTCLOUD_ADMIN_USER
          value: admin
        - name: NEXTCLOUD_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: NEXTCLOUD_ADMIN_PASSWORD
        - name: NEXTCLOUD_UPLOAD_MAX_FILESIZE
          value: 4G
        - name: NEXTCLOUD_MAX_FILE_UPLOADS
          value: "20"
        - name: MYSQL_DATABASE
          value: nextcloud
        - name: MYSQL_HOST
          value: mariadb
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_USER
          value: nextcloud
        name: nc
        image: nextcloud
        ports:
        - containerPort: 80
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: vnextcloud
          subPath: html
        - mountPath: /var/www/html/custom_apps
          name: vnextcloud
          subPath: apps
        - mountPath: /var/www/html/config
          name: vnextcloud
          subPath: config
        - mountPath: /var/www/html/data
          name: vimages
          subPath: imgnc
        - mountPath: /var/www/html/themes
          name: vnextcloud
          subPath: themes
      restartPolicy: Always
      volumes:
      - name: vnextcloud
        persistentVolumeClaim:
          claimName: nfs-pvcnextcloud
      - name: vimages
        persistentVolumeClaim:
          claimName: nfs-pvcimages
To create the service I use this command:
kubectl expose deployment nc --type=NodePort --name=svc-nc --port 80
And to access my Nextcloud I tried the address IP_MASTER:32500.
My questions are:
How can I check whether a pod is working properly, to know if the problem comes from the service or from the pod?
What should I do to get access to my Nextcloud? I didn't do the tutorial part "Create self-signed certificates" because I don't know how to manage it. Should it be on my other Linux machine or in my Kubernetes cluster?
1. Please consider using the stable nextcloud Helm chart.
2. This tutorial is a little outdated; it can also be found here.
As of the Kubernetes 1.16 release you should change the apiVersion in all your Deployments to apiVersion: apps/v1; please take a look at Deprecations and Removals.
In addition you will get the error ValidationError(Deployment.spec): missing required field "selector", so please add a selector to your Deployment under Deployment.spec, like:
selector:
  matchLabels:
    app: db
3. Finally, "Create self-signed certificates": this repo uses OMGWTFSSL - Self Signed SSL Certificate Generator. Once you provide the necessary information (server name, your local hostPath, and names for your SSL certificates), the certificates will be created automatically after one pod run under the specified hostPath:
volumes:
- name: certs
  hostPath:
    path: "/home/<someFolderLocation>/certs-pv"
This information should be re-used in the "Nginx reverse proxy" section for nginx.conf.
4. In your nc-svc.yaml you can change the service type to type: NodePort.
5. How to verify that your service is working properly:
kubectl get pods,svc,ep -o wide
Pods:
pod/nc-6d8694659d-5przx 1/1 Running 0 15m 10.244.0.6
Svc:
service/svc-nc NodePort 10.102.90.88 <none> 80:32500/TCP
Endpoints:
endpoints/svc-nc 10.244.0.6:80
You can test your service from inside the cluster by running a separate pod (e.g. ubuntu; see the sketch after these commands):
curl your_svc_name
You can verify that service discovery is working properly:
cat /etc/resolv.conf
nslookup your_svc_name (your_svc_name.default.svc.cluster.local)
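A minimal sketch of such an in-cluster check, assuming the svc-nc NodePort service from the question in the default namespace (the throwaway pod name is arbitrary):

kubectl run -it --rm test-client --image=ubuntu -- bash
# inside the pod:
apt-get update && apt-get install -y curl dnsutils
curl svc-nc                                  # reach the service by name over HTTP
nslookup svc-nc.default.svc.cluster.local    # check cluster DNS resolution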
From outside the cluster, using the NodePort:
curl NODE_IP:NODE_PORT (if this doesn't work, please verify your firewall rules)
Once you have provided a hostname for your Nextcloud service, you should use:
curl -vH 'Host: specified_hostname' http://external_ip/ (using http or https according to your configuration)
In addition you can exec directly into your db pod with
kubectl exec -it db_pod -- /bin/bash and run:
mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
mysqlshow -uroot -p$MYSQL_ROOT_PASSWORD --status nextcloud
6. "What should I do to have access to my Nextcloud? I didn't do the tutorial part 'Create self-signed certificates' because I don't know how to manage it."
7. As described under point 3.
8. This part is not clear to me: "from another machine with nginx route a CNAME to the service".
Please refer to:
An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead.
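For illustration, a minimal ExternalName sketch (the service name and the external hostname below are made up, not taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: nextcloud-external
spec:
  type: ExternalName
  # in-cluster lookups of nextcloud-external resolve to this DNS name via a CNAME record
  externalName: nextcloud.example.com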
Additional resources:
Expose your Kubernetes service from your own custom domains
What’s the difference between a CNAME and a Web Redirect?
Hope this helps.

How to set external IP for nginx-ingress controller in private cloud kubernetes cluster

I am setting up a Kubernetes cluster to run Hyperledger Fabric apps. My cluster is on a private cloud, hence I don't have a load balancer. How do I set an IP address for my nginx-ingress-controller (stuck in pending) to expose my services? I think it is interfering with my creation of pods, since when I run kubectl get pods I see very many evicted pods. I am using cert-manager, which I think also needs IPs.
CA_POD=$(kubectl get pods -n cas -l "app=hlf-ca,release=ca" -o jsonpath="{.items[0].metadata.name}")
This does not create any pods.
nginx-ingress-controller-5bb5cd56fb-lckmm 1/1 Running
nginx-ingress-default-backend-dc47d79c-8kqbp 1/1 Running
The rest take the form
nginx-ingress-controller-5bb5cd56fb-d48sj 0/1 Evicted
ca-hlf-ca-5c5854bd66-nkcst 0/1 Pending 0 0s
ca-postgresql-0 0/1 Pending 0 0s
I would like to create pods from which I can run exec commands like
kubectl exec -n cas $CA_POD -- cat /var/hyperledger/fabric-ca/msp/signcerts/cert.pem
You do not expose the nginx-controller's IP address; instead you expose nginx's Service via a NodePort. For example:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-controller
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  selector:
    app: nginx
In this case you'd be able to reach your service like
curl -v <NODE_EXTERNAL_IP>:30080
As to why your pods are in the Pending state, please describe the misbehaving pods:
kubectl describe pod nginx-ingress-controller-5bb5cd56fb-d48sj
The best approach is to use Helm:
helm install stable/nginx-ingress
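Since a private cloud has no load balancer, a sketch of installing the chart with a NodePort service instead (the controller.service.type value is an assumption about the stable/nginx-ingress chart's options, so check the chart's values before relying on it):

helm install stable/nginx-ingress --set controller.service.type=NodePort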

Kubernetes Networkpolicy does not work as expected

I am fairly new to networkpolicies on Calico. I have created the following NetworkPolicy on my cluster:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginxnp-po
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: nginxnp
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          acces: frontend
    ports:
    - port: 80
This is how I read it: All pods that have the selector run=nginxnp are only accessible on port 80 from every pod that has the selector access=frontend.
Here is my nginx pod (with a running nginx in it):
$ kubectl get pods -l run=nginxnp
NAME READY STATUS RESTARTS AGE
nginxnp-9b49f4b8d-tkz6q 1/1 Running 0 36h
I created a busybox container like this:
$ kubectl run busybox --image=busybox --restart=Never --labels=access=frontend -- sleep 3600
I can see that it matches the selector access=frontend:
$ kubectl get pods -l access=frontend
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 6m30s
However when I exec into the busybox pod and try to wget the nginx pod, the connection is still refused.
I also tried setting an egress rule that allows the traffic the other way round, but this didn't help either. As I understand NetworkPolicies, when no rule is set, nothing is blocked. Hence, when I set no egress rule, egress should not be blocked.
If I delete the NetworkPolicy it works. Any pointers are highly appreciated.
There is a typo in the NetworkPolicy template: acces: frontend should be access: frontend.
ingress:
- from:
  - podSelector:
      matchLabels:
        acces: frontend
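With the typo fixed, the ingress block of the same policy reads:

ingress:
- from:
  - podSelector:
      matchLabels:
        access: frontend   # now matches the busybox pod's access=frontend label
  ports:
  - port: 80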

Change index.html nginx kubernetes deployment

I have this deployment:
I was able to edit the index page in one of my pods, but how can I commit it to the deployment image, so that when I scale the application all new pods will have the same image with the edited index?
This worked for me:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /usr/share/nginx/html/index.html
      name: nginx-conf
      subPath: index.html
  volumes:
  - name: nginx-conf
    configMap:
      name: nginx-index-html-configmap
And a simple ConfigMap named nginx-index-html-configmap (to match the Pod above), with the page stored under the index.html key:
data:
  index.html: |
    <html></html>
You will have to create a new image with the updated index.html, and then use this new image on your deployments.
If you want index.html to be easily modifiable, then:
1. Create a new image without the index.html file
2. Store the contents of index.html in a ConfigMap
3. Volume mount the ConfigMap (as explained here) onto the path where you want index.html to be mounted
Then, whenever you want to update index.html, you just have to update the ConfigMap and wait for a few minutes. Kubernetes will take care of syncing the updated index.html.
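For example, one way to push an updated index.html into the existing ConfigMap is to regenerate it from the local file and apply it (a sketch re-using the create command shown just below):

kubectl create configmap nginx-index-html-configmap --from-file=index.html -o yaml --dry-run | kubectl apply -f -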
Following this answer and this readme
One can create a configMap with the following command
kubectl create configmap nginx-index-html-configmap --from-file=index.html -o yaml --dry-run
And then add this ConfigMap as a volumeMount in the k8s Deployment object.
Use an init container for any preprocessing or, as stated above, change the Docker image accordingly before using it.
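A minimal sketch of the init-container idea, assuming an emptyDir volume shared between the init container and nginx (the pod name, generated page, and volume name are illustrative, not from the answers above):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-init
spec:
  initContainers:
  - name: generate-index
    image: busybox
    # write the page into the shared volume before nginx starts
    command: ["sh", "-c", "echo '<html>generated at startup</html>' > /work/index.html"]
    volumeMounts:
    - name: webroot
      mountPath: /work
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: webroot
      mountPath: /usr/share/nginx/html
  volumes:
  - name: webroot
    emptyDir: {}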

Kubernetes persistent storage causing files to disappear inside container

I'm trying to create a Drupal container on Kubernetes with the apache Drupal image.
When a persistent volume is mounted at /var/www/html and I inspect the Drupal container with docker exec -it <drupal-container-name> bash, there are no files visible. Thus no files can be served.
Workflow
1 - Create google compute disk
gcloud compute disks create --size=20GB --zone=us-central1-c drupal-1
2 - Register the newly created google compute disk to the kubernetes cluster instance
kubectl create -f gce-volumes.yaml
3 - Create Drupal pod
kubectl create -f drupal-deployment.yaml
The definition files are inspired by the WordPress example; my drupal-deployment.yaml:
apiVersion: v1
kind: Service
metadata:
  name: drupal
  labels:
    app: drupal
spec:
  ports:
  - port: 80
  selector:
    app: drupal
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dp-pv-claim
  labels:
    app: drupal
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: drupal
  labels:
    app: drupal
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: drupal
        tier: frontend
    spec:
      containers:
      - name: drupal
        image: drupal:8.2.3-apache
        ports:
        - name: drupal
          containerPort: 80
        volumeMounts:
        - name: drupal-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: drupal-persistent-storage
        persistentVolumeClaim:
          claimName: dp-pv-claim
And the gce-volumes.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: drupal-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: drupal-1
    fsType: ext4
What's causing the files to disappear? How can I successfully persist the Drupal installation files at /var/www/html inside the container?
You are mounting the persistentStorage at /var/www/html, so if you had files at that location in your base image, the folder is replaced by the mount, and the files from the base image are no longer available.
If your content will be dynamic and you want to save those files in the persistentStorage, this is the way to do it, however you will need to populate the persistentStorage initially.
One solution would be to have the files in a different folder in the base image and copy them over when you run the Pod; however, this will happen every time you start the Pod and may overwrite your changes, unless your copy script first checks whether the folder is empty.
Another option is to have a Job do this only once (before you run your Drupal Pod and mount this storage).
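A minimal sketch of the copy-on-start idea: an init container (not part of the original manifests, and requiring the Kubernetes 1.6+ field syntax) added under spec.template.spec of the Drupal Deployment above, which seeds the volume from the image only while the volume is still empty:

initContainers:
- name: seed-html
  image: drupal:8.2.3-apache
  # copy the Drupal files shipped in the image into the still-empty volume;
  # skip the copy if the volume already has content so later changes survive restarts
  command:
  - sh
  - -c
  - |
    if [ -z "$(ls -A /data 2>/dev/null)" ]; then
      cp -a /var/www/html/. /data/
    fi
  volumeMounts:
  - name: drupal-persistent-storage
    mountPath: /data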
Note:
GKE persistent storage only allows ReadWriteOnce (which can be mounted on only a single Pod) or ReadOnlyMany (read-only, but mountable on many Pods), and a volume cannot be mounted with different modes at the same time, so in the end you can only run one of these Pods (i.e. it will not scale).
