Problem with Kubernetes and Nginx. Error code 403

I'm trying to deploy my first Kubernetes application. I've set everything up, but now when I try to access it over the cluster's IP address I get this message:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\": No policy matched.",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
Does anybody know what the problem could be? Does it have anything to do with NGINX?
Also, here is my .yaml file:
# Certificate
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ${APP_NAME}
  namespace: gitlab-managed-apps
spec:
  secretName: ${APP_NAME}-cert
  dnsNames:
    - ${URL}
    - www.${URL}
  acme:
    config:
      - domains:
          - ${URL}
          - www.${URL}
        http01:
          ingressClass: nginx
  issuerRef:
    name: ${CERT_ISSUER}
    kind: ClusterIssuer
---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ${APP_NAME}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/from-to-www-redirect: 'true'
spec:
  tls:
    - secretName: ${APP_NAME}-cert
      hosts:
        - ${URL}
        - www.${URL}
  rules:
    - host: ${URL}
      http:
        paths:
          - backend:
              serviceName: ${APP_NAME}
              servicePort: 80
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  selector:
    name: ${APP_NAME}
    app: ${CI_PROJECT_NAME}
  ports:
    - name: http
      port: 80
      targetPort: http
---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_NAME}
  labels:
    app: ${CI_PROJECT_NAME}
spec:
  replicas: ${REPLICAS}
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: ${CI_PROJECT_NAME}
  template:
    metadata:
      labels:
        name: ${APP_NAME}
        app: ${CI_PROJECT_NAME}
    spec:
      containers:
        - name: webapp
          image: eu.gcr.io/my-site/my-site.com:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
          env:
            - name: COMMIT_SHA
              value: ${CI_COMMIT_SHA}
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 30
            timeoutSeconds: 1
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 5
            timeoutSeconds: 1
          resources:
            requests:
              memory: '16Mi'
            limits:
              memory: '64Mi'
      imagePullSecrets:
        - name: ${REGISTRY_PULL_SECRET}
I would really appreciate it if anybody could help me!

Just add the path in your ingress:
rules:
  - host: ${URL}
    http:
      paths:
        - backend:
            serviceName: ${APP_NAME}
            servicePort: 80
          path: /
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
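As a side note, the extensions/v1beta1 Ingress API used above has since been removed (Kubernetes 1.22+). Under networking.k8s.io/v1 the same rule with an explicit path would look roughly like this, reusing the variables from the question; this is a sketch, and it uses spec.ingressClassName instead of the deprecated annotation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ${APP_NAME}
spec:
  ingressClassName: nginx
  rules:
    - host: ${URL}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ${APP_NAME}
                port:
                  number: 80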

Related

Read only file system error (EFS as persistent storage in EKS)

----Storage files----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: aws.io/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-claim
  namespace: dev
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
--------Deployment file--------------------
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-provisioner
  namespace: dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: efs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: efs-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-locking-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    namespace: dev
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-provisioner
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      serviceAccount: efs-provisioner
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner-config
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner-config
                  key: aws.region
            - name: DNS_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner-config
                  key: dns.name
                  optional: true
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner-config
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /efs-mount
      volumes:
        - name: pv-volume
          nfs:
            server: <File-system-dns>
            path: /
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner-config
  namespace: dev
data:
  file.system.id: <File-system-id>
  aws.region: us-east-2
  provisioner.name: aws.io/aws-efs
  dns.name: ""
------release file----
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: airflow
  namespace: dev
  annotations:
    flux.weave.works/automated: "true"
spec:
  releaseName: airflow-dev
  chart:
    repository: https://airflow.apache.org
    name: airflow
    version: 1.6.0
  values:
    fernetKey: <fernet-key>
    defaultAirflowTag: "2.3.0"
    env:
      - name: "AIRFLOW__KUBERNETES__DAGS_IN_IMAGE"
        value: "False"
      - name: "AIRFLOW__KUBERNETES__NAMESPACE"
        value: "dev"
      - name: "AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY" # name line missing in the original; presumably the worker container repository
        value: "apache/airflow"
      - name: "AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG"
        value: "latest"
      - name: "AIRFLOW__KUBERNETES__RUN_AS_USER"
        value: "50000"
      - name: "AIRFLOW__CORE__LOAD_EXAMPLES"
        value: "False"
    executor: "KubernetesExecutor"
    dags:
      persistence:
        enabled: true
        size: 20Gi
        storageClassName: aws-efs
        existingClaim: efs-claim
        accessMode: ReadWriteMany
      gitSync:
        enabled: true
        repo: git@bitbucket.org:<git-repo>
        branch: master
        maxFailures: 0
        subPath: ""
        sshKeySecret: airflow-git-private-dags
        wait: 30
When I go into the scheduler pod and into the directory /opt/airflow/dags, I get the read-only file system error. But when I run "df -h", I can see that the file system is mounted on the pod; I still get the read-only error.
kubectl get pv -n dev
This shows that the PV has "RWX" access and that it has been mounted to my airflow triggerer and airflow scheduler pods.
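No answer is included for this question in this excerpt. One knob that the external-storage efs-provisioner documents, purely as a hedged pointer, is GID allocation on the StorageClass so that non-root containers (Airflow runs as UID 50000 above) can write to the provisioned directories; whether that applies to a mount that shows up as read-only is an assumption to verify. A minimal sketch, reusing the aws.io/aws-efs provisioner name from the question (parameter names taken from the efs-provisioner docs):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: aws.io/aws-efs
parameters:
  # allocate a dedicated, writable GID per provisioned volume
  gidAllocate: "true"
  gidMin: "40000"
  gidMax: "50000"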

Rabbitmq with nginx ingress

I'm using the nginx ingress controller with Minikube. I can access the RabbitMQ management UI, but when I access the queues I get this error:
Not found
The object you clicked on was not found; it may have been deleted on the server.
If I use port-forward, it works correctly.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbitmq-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  labels:
    name: rabbitmq-ingress
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: /rabbit(/|$)(.*)
            backend:
              service:
                name: rabbitmq-management
                port:
                  number: 15672
Looks like you forgot to mention the host in your ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
    - host: "foo.bar.com"
      http:
        paths:
          - pathType: Prefix
            path: "/bar"
            backend:
              service:
                name: service1
                port:
                  number: 80
host: "foo.bar.com"
Read more: https://kubernetes.io/docs/concepts/services-networking/ingress/#hostname-wildcards

kubernetes nginx ingress controller returns 404

Following this guide, I created an ingress controller on my local Kubernetes server; the only difference is that it is exposed as a NodePort.
I have done some test deployments with their respective services and everything works. Here are the files:
Deploy1:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: helloworld1
spec:
  selector:
    matchLabels:
      app: helloworld1
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld1
    spec:
      containers:
        - name: hello
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080
Deploy2:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: helloworld2
spec:
  selector:
    matchLabels:
      app: helloworld2
  replicas: 1
  template:
    metadata:
      labels:
        app: helloworld2
    spec:
      containers:
        - name: hello
          image: gcr.io/google-samples/hello-app:2.0
          ports:
            - containerPort: 8080
Deploy3:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: geojson-example
spec:
  selector:
    matchLabels:
      app: geojson-example
  replicas: 1
  template:
    metadata:
      labels:
        app: geojson-example
    spec:
      containers:
        - name: geojson-container
          image: "nmex87/geojsonexample:latest"
          ports:
            - containerPort: 8080
Service1:
apiVersion: v1
kind: Service
metadata:
  name: helloworld1
spec:
  # type: NodePort
  ports:
    - port: 8080
  selector:
    app: helloworld1
Service2:
apiVersion: v1
kind: Service
metadata:
  name: helloworld2
spec:
  # type: NodePort
  ports:
    - port: 8080
  selector:
    app: helloworld2
Service3:
apiVersion: v1
kind: Service
metadata:
  name: geojson-example
spec:
  ports:
    - port: 8080
  selector:
    app: geojson-example
This is the ingress controller:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/default-backend: geojson-example
spec:
  rules:
    - http:
        paths:
          - path: /geo
            pathType: Prefix
            backend:
              service:
                name: geojson-example
                port:
                  number: 8080
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
When I do a GET on myServer:myPort/test1 or /test2 everything works; on /geo I get the following response:
{
  "timestamp": "2021-03-09T17:02:36.606+00:00",
  "status": 404,
  "error": "Not Found",
  "message": "",
  "path": "/geo"
}
Why??
If I create a pod and, from inside it, curl geojson-example, it works; but from outside I get a 404 (I think from the nginx ingress controller).
This is the log of the nginx pod:
x.x.x.x - - [09/Mar/2021:17:02:21 +0000] "GET /test1 HTTP/1.1" 200 68 "-" "PostmanRuntime/7.26.8" 234 0.006 [default-helloworld1-8080] [] 192.168.168.92:8080 68 0.008 200
x.x.x.x - - [09/Mar/2021:17:02:36 +0000] "GET /geo HTTP/1.1" 404 116 "-" "PostmanRuntime/7.26.8" 232 0.013 [default-geojson-example-8080] [] 192.168.168.109:8080 116 0.012 404
What can I do?
As per the docs: this annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend.
This service will handle the response when the service in the Ingress rule does not have active endpoints.
You cannot use the same service both as the default backend and for a path. When you do, the path /geo becomes invalid: the default backend is only meant to handle requests the rules cannot serve (for example when a rule's service has no active endpoints), so declaring geojson-example as the default backend and also routing the valid path /geo to it creates a circular setup.
You actually do not need the nginx.ingress.kubernetes.io/default-backend annotation at all.
Your ingress should look like the one below, without the default-backend annotation; or you can keep the annotation, but then you must either stop using geojson-example for any path in paths or use another service for the path /geo. The options you can use are given below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /geo
            pathType: Prefix
            backend:
              service:
                name: geojson-example
                port:
                  number: 8080
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
Or:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/default-backend: geojson-example
spec:
  rules:
    - http:
        paths:
          - path: /geo
            pathType: Prefix
            backend:
              service:
                name: <any_other_service> # here use another service except `geojson-example`
                port:
                  number: 8080
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
Or:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/default-backend: geojson-example
spec:
  rules:
    - http:
        paths:
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
This is about your default backend: you set the geojson-example service as the default backend.
The default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped to an Ingress rule).
Basically, a default backend exposes two URLs:
/healthz, which returns 200
/, which returns 404
So, if you want the geojson-example service as the default backend, then you don't need the /geo path specification. Your manifest file would then be:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/default-backend: geojson-example
spec:
  rules:
    - http:
        paths:
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080
Or, if you want geojson-example to serve a valid ingress path, then you have to remove the default-backend annotation. Your manifest file would then be:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /geo
            pathType: Prefix
            backend:
              service:
                name: geojson-example
                port:
                  number: 8080
          - path: /test1
            pathType: Prefix
            backend:
              service:
                name: helloworld1
                port:
                  number: 8080
          - path: /test2
            pathType: Prefix
            backend:
              service:
                name: helloworld2
                port:
                  number: 8080

https redirects to http and then to https

I have an application running inside EKS. Istio is used as a service mesh. I am having a problem where HTTPS redirects to HTTP and then back to HTTPS. It looks like the problem is at the Istio virtual service: it momentarily switches to HTTP, which I want to prevent.
This is how we installed Istio (the installed version is 1.5.1):
istioctl -n infrastructure manifest apply \
--set profile=default --set values.kiali.enabled=true \
--set values.gateways.istio-ingressgateway.enabled=true \
--set values.gateways.enabled=true \
--set values.gateways.istio-ingressgateway.type=NodePort \
--set values.global.k8sIngress.enabled=false \
--set values.global.k8sIngress.gatewayName=ingressgateway \
--set values.global.proxy.accessLogFile="/dev/stdout"
This is our virtual service. The cluster contains two deployments:
myapps-front
myapps-api
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dev-sanojapps-virtual-service
  namespace: istio-system
spec:
  hosts:
    - "dev-mydomain.com"
  gateways:
    - ingressgateway
  http:
    - match:
        - uri:
            prefix: /myapp/
        - uri:
            prefix: /myapp
      rewrite:
        uri: /
      route:
        - destination:
            host: myapp-front.sanojapps-dev.svc.cluster.local
      headers:
        request:
          set:
            "X-Forwarded-Proto": "https"
            "X-Forwarded-Port": "443"
        response:
          set:
            Strict-Transport-Security: max-age=31536000; includeSubDomains
    - match:
        - uri:
            prefix: /v1/myapp-api/
        - uri:
            prefix: /v1/myapp-api
      rewrite:
        uri: /
      route:
        - destination:
            host: myapp-api.sanojapps-dev.svc.cluster.local
            port:
              number: 8080
    - match:
        - uri:
            prefix: /
      redirect:
        uri: /myapp/
        https_redirect: true
      headers:
        request:
          set:
            "X-Forwarded-Proto": "https"
            "X-Forwarded-Port": "443"
        response:
          set:
            Strict-Transport-Security: max-age=31536000; includeSubDomains
Below is the front-end app's YAML deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-front
  namespace: sanojapps-dev
  labels:
    app: myapp-front
spec:
  selector:
    matchLabels:
      app: myapp-front
  template:
    metadata:
      labels:
        app: myapp-front
    spec:
      containers:
        - name: myapp-front
          image: <ECR_REPO:TAG>
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
          resources:
            limits:
              cpu: 500m
              memory: 1024Mi
            requests:
              cpu: 50m
              memory: 256Mi
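The post does not show a Service for myapp-front, but the VirtualService destination myapp-front.sanojapps-dev.svc.cluster.local implies one exists. A minimal sketch of what it might look like, assuming it simply exposes the container's http port (this Service is not part of the original post):
apiVersion: v1
kind: Service
metadata:
  name: myapp-front
  namespace: sanojapps-dev
spec:
  selector:
    app: myapp-front
  ports:
    - name: http
      port: 80
      targetPort: http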
Our gateway is configured like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: istio-system
  name: sanojapps-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets: ""
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:region:account:certificate/<ACM_ID>
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-2019-08
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:region:account:regional/webacl/sanojapps-acl/<ACM_ID>
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: istio-ingressgateway
              servicePort: 80
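No answer is included for this question in this excerpt. Purely as a hedged sketch, one place worth looking is the Istio Gateway that the VirtualService binds to ("ingressgateway"): Istio's Gateway API has a tls.httpsRedirect flag that makes the plain-HTTP server answer with a 301 instead of serving traffic. Whether this fits the ALB-terminated setup above (the ALB forwards to the gateway on port 80) is an assumption to verify, and the selector and hosts below are also assumptions:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingressgateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "dev-mydomain.com"
      tls:
        httpsRedirect: true # send a 301 for any plain-HTTP request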

Why is the certificate not recognized by the ingress?

I have installed https://cert-manager.io on my K8s cluster and have created a cluster issuer:
apiVersion: v1
kind: Secret
metadata:
  name: digitalocean-dns
  namespace: cert-manager
data:
  # insert your DO access token here
  access-token: secret
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: mail@example.io
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: secret
    solvers:
      - dns01:
          digitalocean:
            tokenSecretRef:
              name: digitalocean-dns
              key: access-token
        selector:
          dnsNames:
            - "*.tool.databaker.io"
            #- "*.service.databaker.io"
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: mail@example.io
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: secret
    solvers:
      - dns01:
          digitalocean:
            tokenSecretRef:
              name: digitalocean-dns
              key: access-token
        selector:
          dnsNames:
            - "*.tool.databaker.io"
I have also created a certificate:
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: hello-cert
spec:
  secretName: hello-cert-prod
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: "*.tool.databaker.io"
  dnsNames:
    - "*.tool.databaker.io"
and it was successfully created:
Normal Requested 8m31s cert-manager Created new CertificateRequest resource "hello-cert-2824719253"
Normal Issued 7m22s cert-manager Certificate issued successfully
To figure out if the certificate is working, I have deployed a service:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.7
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: Hello from the first deployment!
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: hello.tool.databaker.io
      http:
        paths:
          - backend:
              serviceName: hello-kubernetes-first
              servicePort: 80
---
But it does not work properly.
What am I doing wrong?
You haven't specified the secret containing your certificate:
spec:
  tls:
    - hosts:
        - hello.tool.databaker.io
      secretName: <secret containing the certificate>
  rules:
    ...
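Putting this together with the ingress from the question, the full manifest would look roughly like this; the secret name hello-cert-prod is taken from the Certificate's secretName above, and this is a sketch rather than a tested manifest:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - hello.tool.databaker.io
      secretName: hello-cert-prod
  rules:
    - host: hello.tool.databaker.io
      http:
        paths:
          - backend:
              serviceName: hello-kubernetes-first
              servicePort: 80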
