Is there a way to integrate the Apigee Envoy Adapter with a 3rd party JWT IdP such as Auth0? - apigee

I have an Auth0 tenant and I'm trying to integrate it with the Apigee Envoy Adapter, so that the JWT tokens it issues can be used to authenticate the API calls targeting the backend behind the adapter.
My envoy config file is defined as follows:
```yaml
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This is for Envoy 1.16+.
admin:
  access_log_path: /tmp/envoy_admin.log
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 9000
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: default
              domains: "*"
              routes:
              - match: { prefix: / }
                route:
                  cluster: httpbin
          http_filters:
          # evaluate JWT tokens, allow_missing allows API Key also
          - name: envoy.filters.http.jwt_authn
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
              providers:
                auth0:
                  issuer: https://dev-y8zk4.us.auth0.com/
                  audiences:
                  - remote-service-client
                  remote_jwks:
                    http_uri:
                      uri: https://dev-y8zk4.us.auth0.com/.well-known/jwks.json
                      cluster: apigee-auth-service
                      timeout: 5s
                    cache_duration:
                      seconds: 300
                  payload_in_metadata: https://dev-y8zk4.us.auth0.com
              rules:
              - match:
                  prefix: /
                requires:
                  # provider_name: auth0
                  requires_any:
                    requirements:
                    - provider_name: auth0
                    - allow_missing: {}
          # evaluate Apigee rules
          - name: envoy.filters.http.ext_authz
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
              transport_api_version: V3
              grpc_service:
                envoy_grpc:
                  cluster_name: apigee-remote-service-envoy
                timeout: 1s
              metadata_context_namespaces:
              - envoy.filters.http.jwt_authn
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
          access_log:
          # collect Apigee analytics
          - name: envoy.access_loggers.http_grpc
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.grpc.v3.HttpGrpcAccessLogConfig
              common_config:
                transport_api_version: V3
                grpc_service:
                  envoy_grpc:
                    cluster_name: apigee-remote-service-envoy
                log_name: apigee-remote-service-envoy
  clusters:
  # define cluster for httpbin.org target
  - name: httpbin
    connect_timeout: 2s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: httpbin
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "httpbin.org"
                port_value: 443
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        sni: "httpbin.org"
  # define cluster for Apigee remote service
  - name: apigee-remote-service-envoy
    type: LOGICAL_DNS
    http2_protocol_options: {}
    load_assignment:
      cluster_name: apigee-remote-service-envoy
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "localhost"
                port_value: 5000
    common_lb_config:
      healthy_panic_threshold:
        value: 50.0
    health_checks:
    - timeout: 1s
      interval: 5s
      interval_jitter: 1s
      no_traffic_interval: 5s
      unhealthy_threshold: 1
      healthy_threshold: 3
      grpc_health_check: {}
    connect_timeout: 0.25s
  # define cluster for Apigee JWKS certs
  - name: apigee-auth-service
    connect_timeout: 2s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: apigee-auth-service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: "dev-y8zk4.us.auth0.com"
                port_value: 443
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        # sni: "dev-y8zk4.us.auth0.com"
```
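For reference, a config like this can be fed to a locally installed Envoy roughly as follows (a sketch; the file name is just illustrative):

```sh
# start Envoy (1.16+) against the config above; envoy-config.yaml is an assumed file name
envoy --config-path envoy-config.yaml --log-level info
```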
The apigee remote service config file is as follows:
```yaml
# Configuration for apigee-remote-service-envoy (platform: GCP)
# generated by apigee-remote-service-cli provision on 2022-10-26 15:50:10
apiVersion: v1
kind: ConfigMap
metadata:
  name: apigee-remote-service-envoy
  namespace: apigee
data:
  config.yaml: |
    tenant:
      remote_service_api: https://34.102.185.252.nip.io/remote-service
      org_name: apigeex-poc
      env_name: eval
    analytics:
      collection_interval: 10s
    auth:
      jwt_provider_key: https://dev-y8zk4.us.auth0.com/
      append_metadata_headers: true
---
apiVersion: v1
kind: Secret
metadata:
  name: apigeex-poc-eval-policy-secret
  namespace: apigee
type: Opaque
data:
remote-service.crt: eyJrZXlzIjpbeyJhbGciOiJSUzI1NiIsImUiOiJBUUFCIiwia2lkIjoiMjAyMi0wOS0xMlQyMjoxNjo0M1oiLCJrdHkiOiJSU0EiLCJuIjoicmU5RXVZYndZdi16ZVZxc0stLVljUXUxcGtESnNheUxYLUpBY...
remote-service.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcmU5RXVZYndZdit6ZVZxc0srK1ljUXUxcGtESnNheUxYK0pBY0VYZ0RjYUlEYktlCktmSjAxeVgrcXVGcUdiTGdIZDh5LzlaWmpQbW9rcGhoTnBZSVY3d09xdXp3Myt4MFNKUjBuQlBUUlZGejBvZnEKMWE4bnlmNUIyQlVyTUVhSldCUS9TUG52aURWWlNvYytxZzkwaTNCcTc3VXhpUXRVZU04VGQ1U2t1ZExqLzhabQpNNHo4WWpGWGJ2TmtCT05hdm9pOUlHUSsycGh2ZW1VNmkvdWFwRHdueVNpaHRodHR1WGQ3d1lIckdEaWJQejJ3CldPSjVQdk1OWVUxdG1WNFVuZTdGOVJQOFVlZU12VmpNRCtLd0pBZHhudCtaa3hJUU1xRmpQd1VRUmIxWHEydGEKY0d4cXJkRURWTElHRmlHLzhwZW1YcDU4clY3MzVwMDN2ZGsybFFJREFRQUJBb0lCQVFDRkU1SERXT2pHOWRoOQpPdENMOVE3dlB3UkdKVCtyL3RYTUVMRTR0VjJOYko1ZnpJK1NqSHkwdDg4M01xd1k2WERycWYrOUdtVDlwVzVDCmF1L0Y4ZGlFTjlBSkdxdll4M0xZclN6aTFaQkpjdDVvNzdET3VPcDZjMXd4VlZEcjZLdmdoZTA5aW15b0RCazcKR21ycGRsVzI4ZFgvZk9SZmRCbTNMTWc2TDdJd3NWUWUzZlg5VVFLY29ydXFYSlZzMDRpSis2TmxEWkRQRjRnbApYajRlTXFWMEEvbi9jdU9WRnVPMGhQWml5eS9iek9ZbGJIWmYyOEVyb1FHQ1ZyMmRYbCthbW5Rc2FkUTltQjVWCjFDVjFaWGk5M09QeENGemNFRUg3cjJWc0pYUGlVbEN6V0FNcFF2OHVtbk9GNUdHNkdRRmdOdWZseHc2VHZnVS8KUFdkcFc0akJBb0dCQU9MUW5UbnRtS1RDdHlOZ2c3UnBMcWU3RjNtczZXeDByY21ZZ0tVZGFFUEtyQlZDb1pibwptT0xOS0ZXREhkRWZmSWs4Rm5sbEZRZGdjQU1laDBqVTh1M0E2NCtwMDZUeFVmalZvZHZOdzd1bU9sWUZrSEZLCm5BamZrcnB2UDZPZzBLbTlqT3BwM0pTOGxKRllNR1I5YXZ2SitmS3hwWEhkQ09YdHR4MWdHSU5aQW9HQkFNUlEKd1M0RFNCKzRNNWU5cFVPaFFYaXJsZ2xXTmN2VVBxWEdrNVVjOXg5NDQwLzMwR1NNSUdGWXp0QWEzbjlVeUcrbgozN3A5STh0Q2FsZ1ZERUNweEpYQ2tOV3U2U0xTL3lqc3ZHZXdRSVNWRGJRZStEdE4wTzA4VWc1OElOKzd5OUN3CmJQaHE3TXRkbzFJSXpRZjlj...
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    org: apigeex-poc
  name: apigee-remote-service-envoy
  namespace: apigee
```
I registered the API (remote-service-client) on my Auth0 tenant and got a token by curling the token endpoint of my tenant.
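The token request looks roughly like this (a sketch of the Auth0 client_credentials flow; the client ID and secret are placeholders, and the audience is the API identifier registered above):

```sh
curl -s https://dev-y8zk4.us.auth0.com/oauth/token \
  -H "Content-Type: application/json" \
  -d '{"grant_type":"client_credentials","client_id":"CLIENT_ID","client_secret":"CLIENT_SECRET","audience":"remote-service-client"}'
```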
But when I curl Envoy on localhost:8080, passing the Authorization header with the token, a 403 Forbidden error is returned. I validated the token in the jwt.io debugger and it is correct and valid.
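The failing call looks roughly like this (TOKEN holds the access token returned by Auth0):

```sh
curl -i http://localhost:8080/ -H "Authorization: Bearer $TOKEN"
# returns HTTP/1.1 403 Forbidden
```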
I don't know if I'm missing something in the envoy or apigee remote service configuration.
So the question is: is it possible to use a third-party IdP with the Apigee Envoy Adapter, so that the IdP provides the JWT tokens and the public certs used to validate their signatures?
If yes, how do I configure the apigee remote service proxy so that it can authenticate and authorize the calls passing through it?

Related

Ambassador Edge Stack JWT filter with Firebase token not working

I'm trying to verify a Firebase-generated JWT token with an Ambassador Edge Stack (datawire/edge-stack version 3.3.0) Filter.
The Firebase token is generated using a login/password authentication mechanism on Firebase, something like this (in Python):
# (assumed setup, not shown in the question: "authentication" looks like a Pyrebase
# auth client and "auth" like firebase_admin.auth; additional_claims is defined elsewhere)
email = input("Enter email: ")
password = input("Enter password: ")
user = authentication.sign_in_with_email_and_password(email, password)
custom_token = auth.create_custom_token(user["localId"], additional_claims)
print("JWT Token :")
print(custom_token)
After the token is generated, I use it with a curl command such as:
curl -H "Authorization: Bearer $TOKEN" https://ambassador-ip.nip.io/hello-world/
and the curl command returns the following error:
},
"message": "Token validation error: token is invalid: errorFlags=0x00000002=(ValidationErrorUnverifiable) wrappedError=(KeyID=\"50***redacted***1\": JWK not found)",
"status_code": 401
}
Here is the ambassador Filter I've declared:
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: "firebase-filter"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  JWT:
    jwksURI: "https://www.googleapis.com/service_accounts/v1/metadata/x509/securetoken@system.gserviceaccount.com"
    audience: "${local.project_id}"
    issuer: "https://securetoken.google.com/${local.project_id}"
And the policy filter applied to my backend:
apiVersion: getambassador.io/v3alpha1
kind: FilterPolicy
metadata:
  name: "firebase-filter-policy"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  rules:
  - host: "*"
    path: "/hello-world/"
    filters:
    - name: "firebase-filter"
      namespace: "${kubernetes_namespace.hello_world.metadata[0].name}"
For the record, the curl command with the same token works on a deployed hello-world Cloud Run with a GCP API gateway configured as follow:
swagger: '2.0'
info:
  title: Example Firebase auth Gateway
  description: API Gateway with firebase auth
  version: 1.0.0
schemes:
  - https
produces:
  - application/json
securityDefinitions:
  firebase:
    authorizationUrl: ''
    flow: implicit
    type: oauth2
    x-google-issuer: "https://securetoken.google.com/${project_id}"
    x-google-jwks_uri: "https://www.googleapis.com/service_accounts/v1/metadata/x509/securetoken@system.gserviceaccount.com"
    x-google-audiences: "${project_id}"
paths:
  /v1/hello:
    get:
      security:
        - firebase: []
      description: Hello
      operationId: hello
      responses:
        '200':
          description: Success
      x-google-backend:
        address: 'https://hello-redacted-ew.a.run.app'
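The working call against that deployed gateway looks roughly like this (the gateway hostname is a placeholder):

```sh
curl -H "Authorization: Bearer $TOKEN" https://GATEWAY_HOST/v1/hello
```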
Any idea why the Ambassador filter is misconfigured?
Ambassador JWT Filter needs the jwksURI to point to the Firebase secure token service account public keys and not the X509 certificates, therefore the Filter should be:
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: "firebase-filter"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  JWT:
    jwksURI: "https://www.googleapis.com/service_accounts/v1/jwk/securetoken@system.gserviceaccount.com"
    audience: "${local.project_id}"
    issuer: "https://securetoken.google.com/${local.project_id}"
This works for Firebase tokens only. If you want to make this work with custom tokens using a dedicated service account, for example, you might need the jwksURI to point to your service account's public keys, something like:
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: "firebase-custom-filter"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  JWT:
    jwksURI: "https://www.googleapis.com/service_accounts/v1/jwk/${service_account}@${local.project_id}.iam.gserviceaccount.com"
    audience: "${local.project_id}"
    issuer: "https://securetoken.google.com/${local.project_id}"
The JWT Filter requires you to provide the URL for the .well-known/openid-configuration so that it can verify the signature of the token. I'm not familiar with Firebase, but looking at their docs it appears you can find this here:
https://firebase.google.com/docs/auth/web/openid-connect
For example, your Filter should be configured something like the following (I'm guessing at the jwksURI):
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: "firebase-filter"
  namespace: ${kubernetes_namespace.hello_world.metadata[0].name}
spec:
  JWT:
    jwksURI: "https://securetoken.google.com/${local.project_id}/.well-known/openid-configuration"
    audience: "${local.project_id}"
    issuer: "https://securetoken.google.com/${local.project_id}"

Istio - default ssl certificate to work with Azure Front Door

For nginx ingress, there is a way to define default-ssl-certificate with --default-ssl-certificate flag.
Ref: https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-ssl-certificate
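For reference, per the docs linked above the behaviour is enabled by a controller-level argument, roughly like this (a sketch; the namespace and secret name are placeholders for a TLS secret):

```sh
/nginx-ingress-controller --default-ssl-certificate=ingress-nginx/wildcard-tls
```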
How can I do the same for istio?
I have assigned tls.credentialName in istio gateway. But, it's not the same as nginx-ingress default-ssl-certificate.
istio_gateway.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: SERVICE_GATEWAY
spec:
  selector:
    istio: ingressgateway # Use Istio default gateway implementation
  servers:
  - port:
      name: SERVICE_NAME-http-80
      number: 80
      protocol: HTTP
    hosts:
    - "SERVICE_DNS"
  - port:
      name: SERVICE_NAME-https-443
      number: 443
      protocol: HTTPS
    tls:
      credentialName: SERVICE_CRT
      mode: SIMPLE
      minProtocolVersion: TLSV1_2
    hosts:
    - "SERVICE_DNS"
VirtualService:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: SERVICE_NAME
spec:
  hosts:
  - SERVICE_DNS
  gateways:
  - SERVICE_GATEWAY
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: SERVICE_PORT
        host: "SERVICE_NAME.default.svc.cluster.local"
This setup is working for nginx-ingress: https://ssbkang.com/2020/08/17/end-to-end-tls-for-azure-front-door-and-azure-kubernetes-service/
I want to do the same thing with istio.
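One rough approximation that is sometimes suggested (a sketch only, not from the question and not verified with Azure Front Door) is to add a catch-all HTTPS server with a default credentialName to the Gateway, so that SNI names not matching any other server still get served with that certificate:

```yaml
# sketch: appended under spec.servers of the Gateway above; DEFAULT_WILDCARD_CRT is an assumed secret name
- port:
    name: default-https-443
    number: 443
    protocol: HTTPS
  tls:
    credentialName: DEFAULT_WILDCARD_CRT
    mode: SIMPLE
  hosts:
  - "*"
```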

How to set sticky session for multiple services in kubernetes?

I have 2 services:
Restful/websocket API service with Nginx (2 replicas)
Daemon service (1 replica)
The daemon service will emit a websocket event to the frontend at some point. However, the event doesn't seem to be emitted successfully to the frontend from the daemon service.
I also tried to emit events from the API server to the frontend, and the event was successfully emitted to the frontend (maybe because the frontend is connected to the API WebSocket server).
What I have done for sticky-session:
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "daemon"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "daemon"
  type: "NodePort"
  sessionAffinity: ClientIP
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "api"
  namespace: app
spec:
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: "api"
  type: "NodePort"
  sessionAffinity: ClientIP
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api
  namespace: app
spec:
  prefix: /api
  service: api:80
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-ws
  namespace: app
spec:
  prefix: /private
  service: api:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  annotations:
    getambassador.io/resource-downloaded: '2020-03-30T16:10:34.466Z'
  name: api-daemon
  namespace: app
spec:
  prefix: /daemon
  service: daemon:80
  use_websocket: true
  load_balancer:
    policy: ring_hash
    cookie:
      name: sticky-cookie
      ttl: 60s
From kubernetes.io DaemonSet docs:
Service: Create a service with the same Pod selector, and use the service to reach a daemon on a random node. (No way to reach specific node.)
So I think sessionAffinity cannot work with DaemonSet.

VirtualService routing only uses one host

I have the following VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: external-vs
  namespace: dev
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - name: "postauth"
    match:
    - uri:
        exact: /postauth
    route:
    - destination:
        port:
          number: 8080
        host: postauth
  - name: "frontend"
    match:
    - uri:
        exact: /app
    route:
    - destination:
        port:
          number: 8081
        host: sa-frontend
I would expect calls to the /postauth endpoint to be routed to the postauth service and calls to the /app endpoint to be routed to the sa-frontend service. What actually happens is that all calls end up being routed to the first route in the file (in the above case to postauth); if I change the order, they all go to sa-frontend.
All services and deployments are in the same namespace (dev).
Is that somehow the expected behaviour? My interpretation is that the above should only allow calls to the /postauth and /app endpoints and nothing else, and route these to their respective services.
As per the documentation for Istio 1.3, in HTTPMatchRequest you can find:
Field: name, Type: string
I have compared those settings between versions 1.1 and 1.3:
In version 1.3.4 this parameter works properly and the routes are propagated with their names:
[
  {
    "name": "http.80",
    "virtualHosts": [
      {
        "name": "*:80",
        "domains": [
          "*",
          "*:80"
        ],
        "routes": [
          {
            "name": "ala1",
            "match": {
              "prefix": "/hello1",
              "caseSensitive": true
            },
            "route": {
              "cluster": "outbound|9020||hello1.default.svc.cluster.local",
...
          {
            "name": "ala2",
            "match": {
              "prefix": "/hello2",
              "caseSensitive": true
            },
            "route": {
              "cluster": "outbound|9030||hello2.default.svc.cluster.local",
While in version 1.1 it does not work properly. In such cases, please verify your settings against the appropriate release.
In addition, please refer to the Troubleshooting section.
You can verify your applied configuration (changes) inside the cluster, e.g.:
How the Envoy instance was configured:
istioctl proxy-config cluster -n istio-system your_istio-ingressgateway-name
Verify routes configuration and virtual hosts for services:
istioctl proxy-config routes -n istio-system your_istio-ingressgateway-name -o json
Hope this helps.

Istio Virtual Service match uri and cookie not working

I have been trying to apply this virtual service yaml for my microservices:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nameko-notifyms
spec:
  hosts:
  - "*"
  gateways:
  - nameko-notifyms-gateway
  http:
  - match:
    - headers:
        cookie:
          regex: "^(.*?;)?(user=joe)(;.*)?"
      uri:
        exact: /hello
    route:
    - destination:
        host: nameko-notifyms
        port:
          number: 8000
Using the code block above, after curling the URI there is no traffic going into the pod.
If I comment out the header match as shown in the code block below:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nameko-notifyms
spec:
  hosts:
  - "*"
  gateways:
  - nameko-notifyms-gateway
  http:
  - match:
    # - headers:
    #     cookie:
    #       regex: "^(.*?;)?(user=joe)(;.*)?"
    - uri:
        exact: /hello
    route:
    - destination:
        host: nameko-notifyms
        port:
          number: 8000
The traffic is directed to the pod, which can be shown in the image below:
Postman setting as below:
Hello, the problem is old but still relevant, so here is the solution:
The problem comes from the regex: in the first group, the ; is not optional.
Here is the corrected regex:
"^(.*;?)?(user=joe)(;.*)?"
Full details here: https://regex101.com/r/CPv2kU/3
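A quick way to exercise the corrected match (a sketch; the ingress host is a placeholder):

```sh
curl -v -H "Cookie: user=joe" http://$INGRESS_HOST/hello
```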

Resources