I am trying to set up an Istio Gateway with gRPC. I am using the example from https://github.com/h3poteto/istio-grpc-example.
This example does not contain a Gateway, so I added one:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: my-gateway
namespace: istio-grpc-example
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: grpc-wildcard
protocol: GRPC
hosts:
- "*"
and modified the VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: backend
namespace: istio-grpc-example
spec:
hosts:
- "backend"
gateways:
- my-gateway
http:
- match:
- port: 50051
route:
- destination:
host: backend
subset: v0
weight: 90
- destination:
host: backend
subset: v1
weight: 10
Is there something else I should do? I still cannot get through the Gateway and receive an error when querying the service's endpoint.
Thank you!
As I mentioned in the comments: have you tried wildcard hosts, i.e. * instead of backend?
You need to change the VirtualService hosts from
spec:
hosts:
- "backend"
to
spec:
hosts:
- "*"
And @Ondra added that the other thing he changed was the gateway port number.
I changed the port number from 80 to 31400 and changed the host from "backend" to "*". Now it looks like everything is working. – Ondra
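Putting both changes together, a minimal sketch of the corrected manifests (reusing the namespace and subsets from the question; matching the VirtualService on the new gateway port is my assumption, since the comment only mentions the gateway port and the hosts):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  namespace: istio-grpc-example
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400        # per Ondra's comment, instead of 80
      name: grpc-wildcard
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
  namespace: istio-grpc-example
spec:
  hosts:
  - "*"                    # wildcard instead of "backend"
  gateways:
  - my-gateway
  http:
  - match:
    - port: 31400          # assumption: match the gateway port, not the backend port
    route:
    - destination:
        host: backend
        subset: v0
      weight: 90
    - destination:
        host: backend
        subset: v1
      weight: 10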
Related
I'm trying to proxy HTTP requests with a specified URI prefix to an external HTTPS server.
The idea is to keep using our internal Nexus Repository Manager for NPM without losing the ability to run 'npm audit', similar to what this GitHub project does, but implemented with Istio instead of deploying an extra app.
I configured a VirtualService and a ServiceEntry to route the traffic to the external service, but so far I have not been able to turn an HTTP request into an HTTPS request. Is there any way to do this?
Configuration:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs-nexus
spec:
hosts:
- "test.com"
gateways:
- gateway-xy
http:
- match:
- uri:
prefix: /-/npm/v1/security/audits/
route:
- destination:
port:
number: 443
host: registry.npmjs.org
- route:
- destination:
port:
number: 80
host: nexus
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: npmjs-ext
spec:
hosts:
- registry.npmjs.org
ports:
- number: 443
name: tls
protocol: tls
resolution: DNS
location: MESH_EXTERNAL
Found a solution: you need to add a DestinationRule with TLS mode SIMPLE to connect to an external HTTPS service.
The whole configuration for forwarding 'npm audit' requests to the public registry.npmjs.org when you are using a private Nexus repository is:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: vs
spec:
hosts:
- "test.com"
gateways:
- gateway
http:
# Route to npm registry for audit
# Like this: https://github.com/chovyy/npm-audit-proxy
# See: https://istio.io/latest/blog/2019/proxy/
- match:
- uri:
prefix: /-/npm/v1/security
headers:
request:
set:
host: "registry.npmjs.org"
route:
- destination:
port:
number: 443
host: registry.npmjs.org
# This is for custom Nexus repositories: rewrite the URI so that the repository prefix is not forwarded to registry.npmjs.org
- match:
- uri:
prefix: /repository/npm-test-repo/-/npm/v1/security
rewrite:
uri: /-/npm/v1/security
headers:
request:
set:
host: "registry.npmjs.org"
route:
- destination:
port:
number: 443
host: registry.npmjs.org
- route:
- destination:
port:
number: 80
host: nexus
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: npmjs-ext
spec:
hosts:
- registry.npmjs.org
ports:
- number: 443
name: tls
protocol: TLS
resolution: DNS
location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: npmjs-ext
spec:
host: registry.npmjs.org
trafficPolicy:
tls:
mode: SIMPLE
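As a side note, the same TLS origination could be scoped to port 443 only instead of the whole host. This is a hedged variant of the rule above (portLevelSettings is part of the DestinationRule API, but this exact rule is my sketch, not part of the original answer):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: npmjs-ext
spec:
  host: registry.npmjs.org
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE   # originate TLS only for traffic sent to port 443 of registry.npmjs.org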
I have 2 different services in Kubernetes. My goal is to expose these services, and the applications behind them, to the outside. The problem is that the application's index.html contains a redirect, which makes requests bypass the matching rules.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: app1-vs
namespace: customapp
spec:
hosts:
- mydomain.com
gateways:
- customapp-gateway
http:
- match:
- uri:
prefix: "/app1"
route:
- destination:
host: app1.customapp.svc.cluster.local
port:
number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: app2-vs
namespace: customapp
spec:
hosts:
- mydomain.com
gateways:
- customapp-gateway
http:
- match:
- uri:
prefix: "/app2"
route:
- destination:
host: app2.customapp.svc.cluster.local
port:
number: 81
That is, mydomain.com/app1/index.html redirects requests to mydomain.com/login, which falls outside the matching rule. How can I solve this?
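For illustration only (this is not from the original thread): one possible workaround, assuming only app1 redirects to /login, is to also match the redirect target in the same VirtualService so that those requests stay on the app1 backend:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app1-vs
  namespace: customapp
spec:
  hosts:
  - mydomain.com
  gateways:
  - customapp-gateway
  http:
  - match:
    - uri:
        prefix: "/app1"
    - uri:
        prefix: "/login"   # hypothetical: the redirect target used by app1
    route:
    - destination:
        host: app1.customapp.svc.cluster.local
        port:
          number: 80
The cleaner fix is usually to configure each application with a base path so its redirects stay under /app1 or /app2, which keeps the routing rules untouched.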
My Gateway file is as follows:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: my-gateway-secure
namespace: myapp
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
privateKey: /etc/istio/ingressgateway-certs/tls.key
#caCertificates: /etc/istio/ingressgateway-ca-certs/kbundle.crt
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-gateway-service-secure
namespace: myapp
spec:
hosts:
- "sub.domaincom"
gateways:
- my-gateway-secure
http:
- route:
- destination:
host: my-mono
port:
number: 443
protocol: TCP
and my Service file is:
apiVersion: v1
kind: Service
metadata:
name: my-mono
namespace: myapp
labels:
tier: backend
spec:
selector:
app: my-mono
tier: backend
ports:
- port: 443
name: https
protocol: TCP
The Deployment file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-mono
namespace: myapp
spec:
replicas: 1
selector:
matchLabels:
app: my-mono
template:
metadata:
labels:
app: my-mono
spec:
containers:
- name: my-mono
image: myapacheimage
imagePullPolicy: Never
ports:
- containerPort: 443
When I access my service through the gateway, it says:
Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
Apache/2.4.38 (Debian) Server at 10.0.159.77 Port 443
I can confirm that Apache is only listening on 443 and is properly configured.
Your configuration uses TLS termination on the Istio gateway, so HTTPS traffic entering the Istio ingress is decrypted to plain HTTP before it reaches your service endpoint.
To fix this you need to configure HTTPS ingress access to an HTTPS service, i.e. configure the ingress gateway to perform SNI passthrough instead of TLS termination on incoming requests.
You can find an example of an ingress gateway without TLS termination in the Istio documentation guide here.
Your Gateway and VirtualService should look something like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: my-gateway-secure
namespace: myapp
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: PASSTHROUGH
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: my-gateway-service-secure
namespace: myapp
spec:
hosts:
- "sub.domaincom"
gateways:
- my-gateway-secure
tls:
- match:
- port: 443
sni_hosts:
- "sub.domaincom"
route:
- destination:
host: my-mono
port:
number: 443
Hope it helps.
I am in the process of moving from an Azure web service to Azure Kubernetes Service to host an API. I have the solution working with NGINX, oauth2_proxy, and Azure Active Directory; however, the solution requires a cookie to function.
Since this is an API and external security will be managed by an AWS API Gateway with a custom authoriser, I would like the API Gateway to authenticate using a bearer token only and not require a cookie.
I have the solution working and have so far been testing from Postman. In Postman I have the bearer token, but I cannot find a way to access the API without the cookie.
My application presently runs via AWS API Gateway and an Azure App Service with Azure Active Directory; the AWS API Gateway custom authoriser does not require a cookie in that case.
I have the following configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: oauth2-proxy
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: mydomain.com
http:
paths:
- path: /oauth2
backend:
serviceName: oauth2-proxy
servicePort: 4180
tls:
- hosts:
- mydomain.com
secretName: tls-secret
---
# oauth2_proxy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: oauth2-proxy
spec:
replicas: 1
selector:
matchLabels:
app: oauth2-proxy
template:
metadata:
labels:
app: oauth2-proxy
spec:
containers:
- env:
- name: OAUTH2_PROXY_PROVIDER
value: azure
- name: OAUTH2_PROXY_AZURE_TENANT
value: mytennantid
- name: OAUTH2_PROXY_CLIENT_ID
value: my clientid
- name: OAUTH2_PROXY_CLIENT_SECRET
value: my client secret
- name: OAUTH2_PROXY_COOKIE_SECRET
value: my cookie secret
- name: OAUTH2_PROXY_HTTP_ADDRESS
value: "0.0.0.0:4180"
- name: OAUTH2_PROXY_UPSTREAM
value: "file:///dev/null"
image: machinedata/oauth2_proxy:latest
imagePullPolicy: IfNotPresent
name: oauth2-proxy
ports:
- containerPort: 4180
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: oauth2-proxy
name: oauth2-proxy
spec:
ports:
- name: http
port: 4180
protocol: TCP
targetPort: 4180
selector:
app: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress
annotations:
kubernetes.io/ingress.class: nginx
certmanager.k8s.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
tls:
- hosts:
- mydomain.com
secretName: tls-secret
rules:
- host: mydomain.com
http:
paths:
- backend:
serviceName: mayapp
servicePort: 80
I would like to change this configuration so that a cookie is no longer required. If this is not possible, is there another way to achieve the same outcome?
Just drop the OAuth part on Kubernetes and have the API Gateway validate the requests; it can do exactly what you need. You can then secure your Kubernetes cluster so it only accepts requests from the API Gateway, so you don't need to protect your endpoint from other calls.
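In practice that means the oauth2-proxy Ingress, Deployment, and Service can be removed and the nginx auth annotations dropped from the application Ingress. A minimal sketch of the remaining Ingress, reusing the names from the question:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    # auth-url and auth-signin annotations removed: authentication is now
    # handled upstream by the AWS API Gateway custom authoriser
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - backend:
          serviceName: mayapp
          servicePort: 80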
We are running gRPC services on Google Kubernetes Engine with Istio. We have done the following setup for request routing, which is not working.
We receive the following error while making a gRPC call to the service:
upstream connect error or disconnect/reset before headers
Please let me know if something is missing or if there is a workaround.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: helloworld-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 50051
name: grpc
protocol: GRPC
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: helloworld
spec:
hosts:
- "*"
gateways:
- helloworld-gateway
http:
- match:
- port: 50051
route:
- destination:
host: helloworld
port:
number: 50051
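For illustration (not from the original thread): this Envoy error is generic, but one possible cause here is that port 50051 is not among the ports exposed by the default istio-ingressgateway Service; another possible cause is the backend Service port not carrying a grpc prefix in its name. A hedged sketch, assuming the default ingress gateway that already exposes port 80, would be to accept gRPC on that port at the gateway while the backend keeps listening on 50051:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helloworld-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80           # a port the default istio-ingressgateway Service already exposes
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - "*"
  gateways:
  - helloworld-gateway
  http:
  - match:
    - port: 80             # match the port the gateway listens on
    route:
    - destination:
        host: helloworld
        port:
          number: 50051    # the backend service still listens on 50051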