VirtualService routing only uses one host

I have the following VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: external-vs
  namespace: dev
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - name: "postauth"
    match:
    - uri:
        exact: /postauth
    route:
    - destination:
        port:
          number: 8080
        host: postauth
  - name: "frontend"
    match:
    - uri:
        exact: /app
    route:
    - destination:
        port:
          number: 8081
        host: sa-frontend
I would expect calls to the /postauth endpoint to be routed to the postauth service and calls to the /app endpoint to be routed to the sa-frontend service. What actually happens is that all calls end up being routed to the first route in the file, in the above case to postauth; if I change the order, everything goes to sa-frontend.
All services and deployments are in the same namespace (dev).
Is that somehow the expected behaviour? My interpretation is that the above should only allow calls to the /postauth and /app endpoints and nothing else, and route these to their respective services.

As per the documentation for Istio 1.3, in HTTPMatchRequest you can find:
Field: name, Type: string
I have compared those settings between the 1.1 and 1.3 versions.
In version 1.3.4 this parameter works properly, and the routes were propagated with their names:
[
  {
    "name": "http.80",
    "virtualHosts": [
      {
        "name": "*:80",
        "domains": [
          "*",
          "*:80"
        ],
        "routes": [
          {
            "name": "ala1",
            "match": {
              "prefix": "/hello1",
              "caseSensitive": true
            },
            "route": {
              "cluster": "outbound|9020||hello1.default.svc.cluster.local",
...
          {
            "name": "ala2",
            "match": {
              "prefix": "/hello2",
              "caseSensitive": true
            },
            "route": {
              "cluster": "outbound|9030||hello2.default.svc.cluster.local",
In version 1.1 it does not work properly, so please verify your settings against the appropriate release.
In addition, please refer to the Troubleshooting section.
You can verify the configuration (changes) applied inside the cluster, e.g.:
How the Envoy instance was configured:
istioctl proxy-config cluster -n istio-system your_istio-ingressgateway-name
Verify the routes configuration and virtual hosts for services:
istioctl proxy-config routes -n istio-system your_istio-ingressgateway-name -o json
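For example, to quickly confirm which route and virtual-host names were actually pushed to the gateway, you can filter the JSON output (a minimal sketch; the gateway pod name is a placeholder, and grep is used to stay dependency-free):
# Print only the "name" fields (routes and virtual hosts) from the pushed config
istioctl proxy-config routes -n istio-system your_istio-ingressgateway-name -o json | grep '"name"'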
Hope this helps.

disabling discovery for k8s api client

Right now I am using the approach from the first answer to:
Cannot read configmap with name: [xx] in namespace ['default'] Ignoring
But in application logs:
2022-04-19 14:14:57.660 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] INFO i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Start listing and watching...
2022-04-19 14:14:57.662 [controller-reflector-io.kubernetes.client.openapi.models.V1ConfigMap-1] [] ERROR i.k.c.informer.cache.ReflectorRunnable - class io.kubernetes.client.openapi.models.V1ConfigMap#Reflector loop failed unexpectedly
io.kubernetes.client.openapi.ApiException:
at io.kubernetes.client.openapi.ApiClient.handleResponse(ApiClient.java:974)
at io.kubernetes.client.openapi.ApiClient.execute(ApiClient.java:886)
at io.kubernetes.client.informer.SharedInformerFactory$1.list(SharedInformerFactory.java:207)
at io.kubernetes.client.informer.cache.ReflectorRunnable.run(ReflectorRunnable.java:88)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
And it works properly, reading the ConfigMaps mounted in the deployment.
How can I disable all the other features?
Using implementation("org.springframework.cloud:spring-cloud-starter-kubernetes-client-config:2.1.1")
In bootstrap.yml:
spring:
  application:
    name: toast
  cloud:
    vault:
      enabled: false
    kubernetes:
      reload:
        enabled: true
        mode: event
        strategy: refresh
      config:
        sources:
        - name: ${spring.application.name}-common
        - name: ${spring.application.name}
        enabled: true
        paths:
        #- { { .Values.application } }-common-config/data.yml
        #- { { .Values.application } }-config/application.yml
        - /etc/${spring.application.name}-common/config/application.yml
        - /etc/${spring.application.name}/config/data.yml
      enabled: true
I need to be able to use it without RBAC resources in k8s.

Block particular path on ingress-nginx Loadbalancer

I have many domains pointing to the Ingress Controller IP. I want to block /particular-path for all the domains/sites. Is there a way to do this?
I can use nginx.ingress.kubernetes.io/configuration-snippet: | for each site, but I am looking for a way to do it for all sites/domains/Ingress resources at once.
Controller used: https://kubernetes.github.io/ingress-nginx/
There are two ways to achieve this:
1. The first one is using the server-snippet annotation:
Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block.
Here is my manifest for the ingress object:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* /admin-access {
        deny all;
        return 403;
      }
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
Please note that when using this approach:
This annotation can be used only once per host.
2. The second one is with usage of ConfigMaps and server-snippet:
What you have to do is locate your ConfigMap:
kubectl get pod <nginx-ingress-controller> -o yaml
It is referenced in the container args:
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --configmap=$(POD_NAMESPACE)/nginx-loadbalancer-conf
Then just edit it and add the server-snippet part:
apiVersion: v1
data:
  server-snippet: |
    location /admin-access {
      deny all;
    }
This approach allows you to define a restricted location globally, for all hosts defined in Ingress resources.
Please note that with the server-snippet approach the path that you are blocking cannot be defined in an Ingress resource object. There is however another way, with location-snippet via ConfigMap:
location ~* "^/web/admin" {
  deny all;
}
With this, for every existing path in the ingress object there will be an ingress rule, but it will be blocked for the specific URI (in the example above it will be blocked when admin appears after web). All other URIs will be passed through.
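For completeness, a hedged sketch of that as a full ConfigMap (location-snippet is the documented ConfigMap key; the name and namespace must match what the controller is started with, here assumed to be the nginx-loadbalancer-conf referenced in the args above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-loadbalancer-conf   # must match the --configmap flag above
  namespace: ingress-nginx        # assumption: the controller's namespace
data:
  location-snippet: |
    location ~* "^/web/admin" {
      deny all;
    }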
3. Here's a test:
➜ curl -H "Host: domain.com" 172.17.0.4/test
...
  "path": "/test",
  "headers": {
  ...
  },
  "method": "GET",
  "body": "",
  "fresh": false,
  "hostname": "domain.com",
  "ip": "172.17.0.1",
  "ips": [
    "172.17.0.1"
  ],
  "protocol": "http",
  "query": {},
  "subdomains": [],
  "xhr": false,
  "os": {
    "hostname": "web-6b686fdc7d-4pxt9"
...
And here is a test with a path that has been denied:
➜ curl -H "Host: domain.com" 172.17.0.4/admin-access
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.19.0</center>
</body>
</html>
➜ curl -H "Host: domain.com" 172.17.0.4/admin-access/test
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.19.0</center>
</body>
</html>
Additional information: Deprecated APIs Removed In 1.16. Here’s What You Need To Know:
The v1.22 release will stop serving the following deprecated API versions in favor of newer and more stable API versions:
Ingress in the extensions/v1beta1 API version will no longer be served
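For reference, a sketch of the same minimal-ingress rewritten for the stable networking.k8s.io/v1 API (the server-snippet is unchanged; only the schema fields differ):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* /admin-access {
        deny all;
        return 403;
      }
spec:
  rules:
  - host: domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80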
You cannot block specific paths. What you can do is point the path of the host inside your ingress to a default backend application that returns a 404, for example.
You can apply it using the ingress annotation:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: channel-dev
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: "27.110.30.45, 68.50.85.421"
  name: dev-ingress
  namespace: development
spec:
  rules:
  - host: hooks.dev.example.com
    http:
      paths:
      - backend:
          serviceName: hello-service
          servicePort: 80
        path: /incoming/message/
  tls:
  - hosts:
    - hooks.dev.example.com
    secretName: channel-dev
The path https://hooks.dev.example.com/incoming/message/ will only be accessible from the mentioned IPs; other users will get a 403 error and won't be able to access the URL.
Just add this annotation to the ingress:
nginx.ingress.kubernetes.io/whitelist-source-range

How to debug ingress-controller connections with a single IP by ConfigMap

We are trying to edit our ingress-nginx.yml to make the ingress-controller pods debug traffic coming from a specific source IP.
Our setup is:
Kubernetes v1.13
Ingress-Controller v0.24.1
From the NGINX and Kubernetes docs it appears there is no easy way to debug traffic from a single IP (you cannot edit the nginx config directly). So we would like to add the debug_connection directive, to appear like this:
error_log /path/to/log;
...
events {
    debug_connection 192.168.1.1;
}
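(Side note: debug_connection only takes effect when the nginx binary is compiled with --with-debug, so it is worth checking the controller's binary first; a sketch, with the pod name as a placeholder:
kubectl exec -n ingress-nginx <controller-pod> -- nginx -V 2>&1 | grep -o -- --with-debug
If this prints nothing, debug_connection will not produce debug output.)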
The correct way to do it should be through custom annotations in a ConfigMap plus a new ingress to enable the custom annotation, so we tried this:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  ingress-template: |
    # Creating the custom annotation to make debug_connection on/off
    {if index $.Ingress.Annotations "custom.nginx.org/debug_connection"}
    {$ip := index $.Ingress.Annotations "custom.nginx.org/ip"}
    {end}
    {range $events := .Events}
    events {
        # handling custom.nginx.org/debug_connection
        {if index $.Ingress.Annotations "custom.nginx.org/debug_connection"}
        {end}
And:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: debugenabler
  annotations:
    kubernetes.io/ingress.class: "nginx"
    custom.nginx.org/debug_connection: "on"
    custom.nginx.org/ip: "192.168.1.1"
spec:
  rules:
  - host: "ourhostname"
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
We applied ingress-nginx.yml with no errors. We see new lines in the nginx conf:
location /coffee {
    set $namespace      "test";
    set $ingress_name   "debugenabler";
    set $service_name   "coffee-svc";
    set $service_port   "80";
    set $location_path  "/coffee";

    rewrite_by_lua_block {
        lua_ingress.rewrite({
            force_ssl_redirect = true,
            use_port_in_redirects = false,
        })
        balancer.rewrite()
But still there is nothing regarding debug_connection in the events block:
events {
    multi_accept        on;
    worker_connections  16384;
    use                 epoll;
}
How can we insert debug_connection in the events context?
For those who may face similar challenges, I actually managed to do it by:
1. Creating a ConfigMap with a new ingress-controller template file (nginx.tmpl) containing the debug_connection line (double-check your ingress-controller version here; the file changes dramatically between versions).
2. Creating a Volume that points to the ConfigMap (specifying the volume and volumeMount).
3. Creating an initContainer which copies the content of the volume into /etc/nginx/template before the container starts (this was needed to overcome what were probably permission-related issues).
For points 2 and 3 you can add the relevant code at the end of a Deployment or Pod spec; I share an example (a sketch of the point-1 ConfigMap follows after it):
volumes:
- name: nginxconf2
  configMap:
    name: nginxconf2
    items:
    - key: nginx.tmpl
      path: nginx.tmpl
initContainers:
- name: copy-configs
  image: {{ kubernetes.ingress_nginx.image }}
  volumeMounts:
  - mountPath: /nginx
    name: nginxconf2
  command: ['sh', '-c', 'cp -R /nginx/ /etc/nginx/template/']
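And for point 1, a minimal sketch of what the ConfigMap could look like (the name nginxconf2 and key nginx.tmpl match the volume above; the namespace is an assumption, and the template body is a placeholder for your controller version's full nginx.tmpl):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginxconf2            # referenced by the volume above
  namespace: ingress-nginx    # assumption: the controller's namespace
data:
  nginx.tmpl: |
    # paste the full nginx.tmpl for your ingress-controller version here,
    # with the debug_connection line added inside the events block
Equivalently, kubectl create configmap nginxconf2 --from-file=nginx.tmpl -n ingress-nginx builds the same object from a local copy of the template.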

nginx-ingress: Too many redirects when force-ssl is enabled

I am setting up my first ingress in kubernetes using nginx-ingress. I set up the ingress-nginx load balancer service like so:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "ingress-nginx",
    "namespace": "...",
    "labels": {
      "k8s-addon": "ingress-nginx.addons.k8s.io"
    },
    "annotations": {
      "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "tcp",
      "service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "*",
      "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn....",
      "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "443"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "http",
        "protocol": "TCP",
        "port": 80,
        "targetPort": "http",
        "nodePort": 30591
      },
      {
        "name": "https",
        "protocol": "TCP",
        "port": 443,
        "targetPort": "http",
        "nodePort": 32564
      }
    ],
    "selector": {
      "app": "ingress-nginx"
    },
    "clusterIP": "...",
    "type": "LoadBalancer",
    "sessionAffinity": "None",
    "externalTrafficPolicy": "Cluster"
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {
          "hostname": "blablala.elb.amazonaws.com"
        }
      ]
    }
  }
}
Notice how the https port has its targetPort property pointing to port 80 (http) in order to terminate ssl at the load balancer.
My ingress looks something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: something
  namespace: ...
  annotations:
    ingress.kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: some-service
          servicePort: 2100
Now when I navigate to the URL I get a Too many redirects error. Something that confuses me is that when I add the header X-Forwarded-Proto: https I get the expected response (curl https://www.example.com -v -H "X-Forwarded-Proto: https").
Any ideas how I can resolve the issue?
P.S. This works just fine with ingress.kubernetes.io/force-ssl-redirect: "false", and there don't seem to be any extraneous redirects.
That is a known issue with the annotation for SSL redirection in combination with proxy-protocol and termination of SSL connections on the ELB.
A question about it was posted on GitHub, and here is a fix from that thread:
You should create a custom ConfigMap for nginx-ingress instead of using the force-ssl-redirect annotation, like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: ingress-nginx
  name: nginx-ingress-configuration
  namespace: <ingress-namespace>
data:
  ssl-redirect: "false"
  hsts: "true"
  server-tokens: "false"
  http-snippet: |
    server {
      listen 8080 proxy_protocol;
      server_tokens off;
      return 301 https://$host$request_uri;
    }
That configuration will create an additional listener with a simple redirect to https.
Then apply that ConfigMap to your ingress-controller, and add NodePort 8080 to its container definition and to the Service.
Now you can point port 80 of your ELB to port 8080 of the Service.
With that additional listener, it will work, as sketched below.
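A minimal sketch of the resulting Service ports under those assumptions (the port names are illustrative; the point is that ELB:80 now reaches the 8080 redirect listener, while ELB:443 still terminates SSL and forwards plain HTTP):
spec:
  ports:
  - name: http-redirect
    port: 80
    targetPort: 8080   # the http-snippet server above; only returns the 301
  - name: https
    port: 443
    targetPort: http   # SSL terminated on the ELB, forwarded as plain HTTP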
Adding another cause of the Too many redirects error.
While working with ingress-nginx as an ingress controller in front of some k8s services, one of the services (ArgoCD in my case) handled TLS termination by itself and always redirected HTTP requests to HTTPS.
The problem is that the nginx ingress controller also handled TLS termination and communicated with the backend service over HTTP, so ArgoCD's server always responded with a redirect to HTTPS. That is the cause of the multiple redirects.
Any attempt to pass the relevant values to the ingress annotations below will not help:
annotations:
  nginx.ingress.kubernetes.io/ssl-redirect: false/true
  nginx.ingress.kubernetes.io/backend-protocol: "HTTP"/"HTTPS"
The solution was to ensure that the service does not handle TLS, by passing the --insecure flag to the argocd-server deployment:
spec:
  template:
    spec:
      containers:
      - name: argocd-server
        command:
        - argocd-server
        - --repo-server
        - argocd-repo-server:8081
        - --insecure # <-- Here
I had to add these annotations to make it work without changing the ingress-controller:
annotations:
  kubernetes.io/ingress.class: nginx-ingress-internal # <- AWS NLB
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/configuration-snippet: |
    if ($http_x_forwarded_proto = 'http') {
      return 301 https://$host$request_uri;
    }
Another approach that worked for my environment (k8s v1.16.15, rancher/nginx-ingress-controller:nginx-0.32.0-rancher1):
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx
data:
  compute-full-forwarded-for: "true"
  use-forwarded-headers: "true"
This worked with force-ssl-redirect enabled on the application's ingress. It seems that the ingress-controller does not use the X-Forwarded-Proto header from the ELB out of the box.
I had this issue in a Keycloak setup installed via Helm chart as well. The SSL termination is done on the ELB, so to fix it I made the following changes in the Helm values:
ingress:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
This fixed it for me.

How to change default swagger.json file path?

I have created an NGINX-Consul Docker setup following https://github.com/nginxinc/NGINX-Demos/tree/master/consul-template-demo and have created many microservices. All the microservices are accessible only after adding the service name, e.g.
http://example.com/service_name/get_data
All is working fine. Then I wanted to add Swagger for all the microservices.
I am able to access the Swagger UI using
http://example.com/service_name/ui
But the problem is that I am not able to load swagger.json in that UI, as it is trying to access swagger.json at
http://example.com/swagger.json
but the JSON file is at
http://example.com/service_name/swagger.json
How can I change the default path of swagger.json?
The applications in the microservices are created in Python/Flask.
I have tried the below snippet:
swagger: "2.0"
info:
description: "Add service"
version: "1.0.0"
title: "Add Service"
contact:
email: "abc#efg.com"
license:
name: "s1.0"
url: "http://sample.com"
host: "abc.efg.com"
tags:
- name: "add service"
description: "service"
- name: "delete service"
description: "data"
schemes:
- "http"
paths:
/service_name/get_data:
I have even tried to add basePath in the swagger.yaml file, but then it did not even open the Swagger UI:
swagger: "2.0"
info:
description: "Add service"
version: "1.0.0"
title: "Add Service"
contact:
email: "abc#efg.com"
license:
name: "s1.0"
url: "http://sample.com"
host: "abc.efg.com"
basePath: "service_name"
tags:
- name: "add service"
description: "service"
- name: "delete service"
description: "data"
schemes:
- "http"
paths:
/get_data:
Update:
from flask import Flask
import connexion

app = Flask(__name__)  # note: immediately overwritten by the connexion app below
app = connexion.App(__name__)
app.add_api('swagger.yaml')

# apis

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8090, debug=True)
I had a similar problem myself. The solution for me was to disable path rewriting at the NGINX level, so that the microservice receives the full URL (see the sketch after this comparison):
Before:
Request:
http://example.com/service_name/get_data
Service sees:
/get_data
After:
Request:
http://example.com/service_name/get_data
Service sees:
/service_name/get_data
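In raw NGINX terms this is just the trailing-slash behaviour of proxy_pass; a minimal sketch, assuming a hypothetical upstream named service_name_upstream:
# Before: a URI part on proxy_pass replaces the matched prefix,
# so the service sees /get_data
location /service_name/ {
    proxy_pass http://service_name_upstream/;
}
# After: no URI part passes the request path through unchanged,
# so the service sees /service_name/get_data
location /service_name/ {
    proxy_pass http://service_name_upstream;
}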
Only then can you specify basePath as "service_name" in the swagger.yaml file:
swagger: "2.0"
info:
description: "Add service"
version: "1.0.0"
title: "Add Service"
host: "abc.efg.com"
basePath: "service_name"
If the swagger.json file is static, you can add an alias rule in the NGINX config like this:
location ^~ /swagger.json {
    alias /path_to/swagger.json;
}
