I've created a pod that works as an Nginx proxy. It works well with the default configuration, but when I add my custom configuration via a ConfigMap it crashes.
This is the only log I have.
/bin/bash: /etc/nginx/nginx.conf: Read-only file system
My deployment.yaml
volumeMounts:
- name: nginx-config
  mountPath: /etc/nginx/nginx.conf
  subPath: nginx.conf
  readOnly: false
volumes:
- name: nginx-config
  configMap:
    name: nginx-config
In case it helps, I found this answer on StackOverflow, but I don't understand how to carry out those steps.
The config is correct. Reading the Nginx Docker documentation, I found that I have to add command: [ "/bin/bash", "-c", "nginx -g 'daemon off;'" ]
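For reference, a minimal sketch of where that command field sits in the container spec, combined with the subPath mount from the question (the container name and image are assumptions, not from the original answer):

containers:
- name: nginx-proxy   # assumed name
  image: nginx
  command: [ "/bin/bash", "-c", "nginx -g 'daemon off;'" ]
  volumeMounts:
  - name: nginx-config
    mountPath: /etc/nginx/nginx.conf
    subPath: nginx.conf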
I'm stuck at an Ansible HackerRank lab (Fresco Play) that asks me to install nginx and postgresql and ensure they are running.
But after finishing the code and running the exam, it checks whether the nginx server redirects to google.com after a restart.
Has anyone faced this issue?
Below is my code to install and ensure services are running:
- name: 'To install packages'
  hosts: localhost
  connection: local
  become: yes
  become_method: sudo
  tasks:
    - apt:
        name: "{{ item }}"
        state: present
      with_items:
        - nginx
        - postgresql

    - apt: name=nginx state=latest
    - name: start nginx
      service:
        name: nginx
        state: started

    - apt: name=postgresql state=latest
    - name: start postgresql
      service:
        name: postgresql
        state: started
I wrote these in two separate playbooks for now, and I need help with redirecting nginx to google.com.
You need to write your nginx configuration file (in this case specifying the redirect to google) and copy it to /etc/nginx/nginx.conf:
- name: write nginx.conf
  template:
    src: <path_to_file>
    dest: /etc/nginx/nginx.conf
After this you should restart the nginx service.
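A minimal sketch of both pieces, assuming the goal is a blanket redirect to google.com (the server block contents and the restart task are illustrative, not part of the original answer):

# nginx.conf template source: redirect everything to google.com
server {
    listen 80;
    location / {
        return 301 http://google.com;
    }
}

# then restart the service
- name: restart nginx
  service:
    name: nginx
    state: restarted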
Thanks!
The code below worked for me:
Define your port number and the site you wish to redirect the nginx server to in a .j2 file in the templates folder under your role (see the sketch after the playbook below).
Include a task in the playbook to copy the template to /etc/nginx/sites-enabled/default. Include a notify for the handler defined in the handlers folder.
In some cases, if the nginx server doesn't restart, run 'sudo service nginx restart' at the terminal before testing your code.
Ansible-Sibelius (Try it Out- Write a Playbook)
#installing nginx and postgresql
- name: Install nginx
  apt: name=nginx state=latest
  tags: nginx

- name: restart nginx
  service:
    name: nginx
    state: started

- name: Install PostgreSQL
  apt: name=postgresql state=latest
  tags: PostgreSQL

- name: Start PostgreSQL
  service:
    name: postgresql
    state: started

- name: Set the configuration for the template file
  template:
    src: /<path-to-your-roles>/templates/sites-enabled.j2
    dest: /etc/nginx/sites-enabled/default
  notify: restart nginx
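The playbook above notifies a 'restart nginx' handler and renders sites-enabled.j2; minimal sketches of both, assuming a plain redirect and a placeholder port variable (nginx_port is hypothetical):

# handlers/main.yml
- name: restart nginx
  service:
    name: nginx
    state: restarted

# templates/sites-enabled.j2
server {
    listen {{ nginx_port }};
    location / {
        return 301 http://google.com;
    }
}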
My goal is:
create a pod with Nextcloud
create a service to access this pod
from another machine with nginx, route a CNAME to the service
I tried to deploy a pod with Nextcloud and a service to access it, but I actually can't access it. I get the error:
ERR_SSL_PROTOCOL_ERROR
I just followed a tutorial at the beginning, but I didn't want to use nginx as it described because I have it on another machine.
When I look at the pods (nextcloud + db) and the services, they look OK, but I get no response when I try to access Nextcloud.
(nc = nextcloud)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nc
  name: nc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nc
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nc
    spec:
      containers:
      - env:
        - name: DEBUG
          value: "false"
        - name: NEXTCLOUD_URL
          value: http://test.fr
        - name: NEXTCLOUD_ADMIN_USER
          value: admin
        - name: NEXTCLOUD_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: nextcloud
              key: NEXTCLOUD_ADMIN_PASSWORD
        - name: NEXTCLOUD_UPLOAD_MAX_FILESIZE
          value: 4G
        - name: NEXTCLOUD_MAX_FILE_UPLOADS
          value: "20"
        - name: MYSQL_DATABASE
          value: nextcloud
        - name: MYSQL_HOST
          value: mariadb
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb
              key: MYSQL_ROOT_PASSWORD
        - name: MYSQL_USER
          value: nextcloud
        name: nc
        image: nextcloud
        ports:
        - containerPort: 80
          protocol: TCP
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/www/html
          name: vnextcloud
          subPath: html
        - mountPath: /var/www/html/custom_apps
          name: vnextcloud
          subPath: apps
        - mountPath: /var/www/html/config
          name: vnextcloud
          subPath: config
        - mountPath: /var/www/html/data
          name: vimages
          subPath: imgnc
        - mountPath: /var/www/html/themes
          name: vnextcloud
          subPath: themes
      restartPolicy: Always
      volumes:
      - name: vnextcloud
        persistentVolumeClaim:
          claimName: nfs-pvcnextcloud
      - name: vimages
        persistentVolumeClaim:
          claimName: nfs-pvcimages
For creating the service I use this command line:
kubectl expose deployment nc --type=NodePort --name=svc-nc --port 80
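For reference, kubectl expose derives the Service from the Deployment's labels; it generates roughly the following manifest (a sketch; the nodePort, e.g. 32500, is auto-assigned unless set explicitly):

apiVersion: v1
kind: Service
metadata:
  name: svc-nc
spec:
  type: NodePort
  selector:
    app: nc
  ports:
  - port: 80
    targetPort: 80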
And to access my nextcloud I tried the address #IP_MASTER:32500
My questions are:
How can I check whether a pod is working well, to know if the problem comes from the service or from the pod?
What should I do to get access to my Nextcloud? I didn't do the tutorial part "Create self-signed certificates" because I don't know how to manage it. Should it be on my other Linux machine or in my Kubernetes cluster?
1. Please consider using the stable nextcloud helm chart.
2. This tutorial is a little outdated and can also be found here.
As of the Kubernetes 1.16 release you should change apiVersion in all your deployments to apiVersion: apps/v1; please take a look at Deprecations and Removals.
In addition, you would otherwise get the error ValidationError(Deployment.spec): missing required field "selector", so please add selectors in your deployment under Deployment.spec, like:
selector:
  matchLabels:
    app: db
3. Finally, Create self-signed certificates: this repo uses OMGWTFSSL - Self Signed SSL Certificate Generator. Once you provide the necessary information (server name, path to your local hostPath, and names for your SSL certificates), they will be created automatically after one pod run under the specified hostPath:
volumes:
- name: certs
  hostPath:
    path: "/home/<someFolderLocation>/certs-pv"
This information should be re-used in the Nginx reverse proxy section for nginx.conf.
4. In your nc-svc.yaml you can change the service type to type: NodePort.
5. How to verify that your service is working properly:
kubectl get pods,svc,ep -o wide
Pods:
pod/nc-6d8694659d-5przx 1/1 Running 0 15m 10.244.0.6
Svc:
service/svc-nc NodePort 10.102.90.88 <none> 80:32500/TCP
Endpoints:
endpoints/svc-nc 10.244.0.6:80
You can test your service from inside the cluster by running a separate pod (e.g. ubuntu):
curl your_svc_name
You can verify that service discovery is working properly:
cat /etc/resolv.conf
nslookup your_svc_name (your_svc_name.default.svc.cluster.local)
From outside the cluster, using the NodePort:
curl NODE_IP:NODE_PORT (if you get no response, please verify your firewall rules)
Once you have provided a hostname for your nextcloud service, you should use:
curl -vH 'Host:specified_hostname' http://external_ip/ (using http or https according to your configuration)
In addition you can exec directly into your db pod:
kubectl exec -it db_pod -- /bin/bash
and run:
mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
mysqlshow -uroot -p$MYSQL_ROOT_PASSWORD --status nextcloud
6. What should I do to have access to my Nextcloud?
I didn't do the tutorial part "Create self-signed certificates" because I don't know how to manage it.
7. As described under point 3.
8. This part is not clear to me: "from another machine with nginx route a CNAME to the service".
Please refer to:
An ExternalName Service is a special case of Service that does not have selectors and uses DNS names instead.
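A minimal sketch of an ExternalName Service (the names are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: my.backend.example.com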
Additional resources:
Expose your Kubernetes service from your own custom domains
What’s the difference between a CNAME and a Web Redirect?
Hope this helps.
I'm following the example found here.
I'm simply trying to understand how volumes work with Kubernetes. I'm testing locally, so I need to contend with minikube. I'm trying to make this as simple as possible. I'm using nginx and would like to have it display content that is mounted from a folder on my localhost.
Environment:
macOS 10.12.5
minikube 0.20.0 + xhyve VM
I'm using the latest nginx image from Docker Hub with no modifications.
This works perfectly when I run the docker image outside of minikube.
docker run --name flow-4 \
  -v $(pwd)/website:/usr/share/nginx/html:ro \
  -P -d nginx
But when I try to run it in minikube, I always get a 404 response when I visit the hosted page. Why?
Here are my kubernetes config files...
kubernetes/deploy/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  replicas: 1
  selector:
    matchLabels:
      run: flow-4
  template:
    metadata:
      labels:
        run: flow-4
    spec:
      containers:
      - image: nginx
        name: flow-4
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: flow-4-volume
      volumes:
      - name: flow-4-volume
        hostPath:
          path: /Users/myuser/website
kubernetes/deploy/svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: flow-4
  type: NodePort
Finally, I run it like this:
kubectl create -f kubernetes/deploy/
minikube service flow-4
When it opens in my browser, instead of seeing my index.html page from the website folder, I just get a '404 Not Found' message (above an nginx/1.13.3 footer).
Why am I getting 404? Is nginx not able to see the contents of my mounted folder? Does the VM hosting kubernetes not have access to my 'website' folder?
I suspect this is the problem. I exec into the Kubernetes pod:
kubectl exec -it flow-4-1856897391-m0jh1 /bin/bash
When I look in the /usr/share/nginx/html folder, it is empty. If I manually add an index.html file, then I can see it in my browser. But why won't Kubernetes mount my local drive to this folder?
Update
There seems to be something wrong with mounting full paths from my /Users/** folder. Instead, I used the 'minikube mount' command to mount a local folder containing index.html into the minikube VM. Then in a separate terminal I started my deployment, and it could see the index.html file just fine.
Here is my updated deployment.yaml file which has clearer file names to better explain the different folders and where they are mounted...
kubernetes/deploy/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: flow-4
  name: flow-4
spec:
  replicas: 1
  selector:
    matchLabels:
      run: flow-4
  template:
    metadata:
      labels:
        run: flow-4
    spec:
      containers:
      - image: nginx
        name: flow-4
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: flow-4-volume
      volumes:
      - name: flow-4-volume
        hostPath:
          path: /kube-website
It's using the same svc.yaml file from earlier in the post.
I then ran the whole thing like this from my current directory.
1. mkdir local-website
2. echo 'Hello from local storage' > local-website/index.html
3. minikube mount local-website:/kube-website
Let this run....
In a new terminal, same folder...
4. kubectl create -f kubernetes/deploy/
Once all the pods are running...
5. minikube service flow-4
You should see the 'Hello from local storage' message greet you in your browser. You can edit the local index.html file and then refresh your browser to see the contents change.
You can tear it all down with this...
kubectl delete deployments,services flow-4
Probably the folder you created is not on the Kubernetes node (it is the minikube VM).
Try to create the folder inside the VM and try again:
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip)
mkdir /Users/myuser/website
Also take a look at the minikube host folder mount feature.
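For reference, a sketch of that feature's usage, mapping a host folder into the minikube VM (the paths are placeholders):

minikube mount /Users/myuser/website:/kube-website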
I have a "Deployment" in Kubernetes which works fine in GKE, but fails in MiniKube.
I have a Pod with 2 containers:
(1) Nginx as a reverse proxy (reads secret and configMap volumes at /etc/tls & /etc/nginx respectively)
(2) A JVM-based service listening on localhost
The problem in the minikube deployment is that the Nginx container fails to read the TLS certs which appear not to be there - i.e. the volume mount of the secrets to the Pod appears to have failed.
nginx: [emerg] BIO_new_file("/etc/tls/server.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/tls/server.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file)
But if I do "minikube logs" I get a large number of seemingly "successful" tls volume mounts...
MountVolume.SetUp succeeded for volume "kubernetes.io/secret/61701667-eca7-11e6-ae16-080027187aca-scriptwriter-tls" (spec.Name: "scriptwriter-tls")
And the secrets themselves are in the cluster okay ...
$ kubectl get secrets scriptwriter-tls
NAME TYPE DATA AGE
scriptwriter-tls Opaque 3 1h
So it would appear that, as far as minikube is concerned, all is well from a secrets point of view. But on the other hand the nginx container can't see them.
I can't log on to the container either, since it keeps terminating.
For completeness the relevant sections from the Deployment yaml ...
Firstly the nginx config...
- name: nginx
  image: nginx:1.7.9
  imagePullPolicy: Always
  ports:
  - containerPort: 443
  lifecycle:
    preStop:
      exec:
        command: ["/usr/sbin/nginx", "-s", "quit"]
  volumeMounts:
  - name: "nginx-scriptwriter-dev-proxf-conf"
    mountPath: "/etc/nginx/conf.d"
  - name: "scriptwriter-tls"
    mountPath: "/etc/tls"
And secondly the volumes themselves, at the pod level ...
volumes:
- name: "scriptwriter-tls"
  secret:
    secretName: "scriptwriter-tls"
- name: "nginx-scriptwriter-dev-proxf-conf"
  configMap:
    name: "nginx-scriptwriter-dev-proxf-conf"
    items:
    - key: "nginx-scriptwriter.conf"
      path: "nginx-scriptwriter.conf"
Any pointers or help would be greatly appreciated.
I am a first-class numpty! :-) Sometimes the error is just the error! The problem was that the secrets were created using local $HOME/.ssh/* certs ... and if you generate them from different computers with different certs, then guess what?! All fixed now :-)
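For anyone debugging a similar secret-mount issue, a few commands that help narrow it down (the pod name is a placeholder):

kubectl describe pod <pod-name>                         # mount events and container state
kubectl logs <pod-name> -c nginx --previous             # logs from the last terminated nginx container
kubectl exec -it <pod-name> -c nginx -- ls -l /etc/tls  # list the mounted cert files once it stays up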
I have a task in a playbook that tries to restart nginx via a handler as per usual:
- name: run migrations
  command: bash -lc "some command"
  notify: restart nginx
The playbook however breaks on this error:
NOTIFIED: [deploy | restart nginx] ********************************************
failed: [REDACTED] => {"failed": true}
msg: failure 1 running systemctl show for 'nginx.service': Failed to get D-Bus connection: No connection to service manager.
The handler is standard:
- name: restart nginx
  service: name=nginx state=restarted enabled=yes
And the way I've set up nginx is not out of the ordinary either:
- name: install nginx
  apt: name=nginx state=present
  sudo: yes

- name: copy nginx.conf to the server
  template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
  sudo: yes

- name: delete default virtualhost
  file: path=/etc/nginx/sites-enabled/default state=absent
  sudo: yes

- name: add mysite site-available
  template: src=mysite.conf.j2 dest=/etc/nginx/sites-available/mysite.conf
  sudo: yes

- name: link mysite site-enabled
  file: path=/etc/nginx/sites-enabled/mysite src=/etc/nginx/sites-available/mysite.conf state=link
  sudo: yes
This is on a ubuntu-14-04-x64 VPS.
The handler was:
- name: restart nginx
  service: name=nginx state=restarted enabled=yes
It seems that the state and enabled flags cannot both be present. By trimming the above to the following, it worked.
- name: restart nginx
  service: name=nginx state=restarted
Why this is, and why it started breaking suddenly, I do not know.
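If you still need nginx enabled at boot, one workaround (an assumption, not from the original answer) is to move the enablement into a regular task that runs during setup and keep the handler restart-only:

- name: enable nginx
  service: name=nginx enabled=yes
  sudo: yes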