How to reference query parameters inside a Kong plugin configuration?

I want to implement a simple redirect using Kong's request-termination & response-transformer plugins. The plugins mostly work, but I have an issue with handling query parameters.
My configuration for the plugins is similar to this (note: I'm using Kong in DB-less mode):
services:
- name: my-redirects
  url: http://localhost:8080/
  routes:
  - name: redirect-route
    paths:
    - /context$
    - /$
    plugins:
    - name: request-termination
      config:
        status_code: 301
        message: Redirect to main page...
    - name: response-transformer
      config:
        add:
          headers:
          - Location:/context/
I want to obtain the following behaviour:
User visits http://localhost:8000/ -> redirected to /context/
User visits http://localhost:8000/context -> redirected to /context/
User visits http://localhost:8000/?param=value -> redirected to /context/?param=value
User visits http://localhost:8000/context?param=value -> redirected to /context/?param=value
Basically both / and /context requests should be redirected to /context/ (with a final slash) but query parameters should be preserved.
How can I modify the configuration:
add:
  headers:
  - Location:/context/
to include query parameters matched in the request? I expect to be able to do something like:
add:
  headers:
  - Location:/context/$query_params

What we use to solve this on Kong running in Kubernetes is an Ingress rule with pathType: Exact coupled with a pre-function plugin. It should be easy to adapt it to the non-Kubernetes YAML config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo-ingress-redirect
  annotations:
    konghq.com/plugins: foo-redirect
spec:
  ...
  rules:
  - host: myhost.example
    http:
      paths:
      - path: /foo/bar # redirect /foo/bar to /foo/bar/ via plugin above
        pathType: Exact
        backend:
          ...
and then the plugin:
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: foo-redirect
plugin: pre-function
config:
  access:
  - kong.response.exit(301, '', {['Location'] = '/foo/bar/'})
It's a bit convoluted, but I did not find a simpler way. You can also play with the kong.request interface if you need to, for example, pass the original query params on the redirect.
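For instance, a minimal sketch of the same pre-function that also carries the original query string over to the redirect, using kong.request.get_raw_query() from the Kong PDK (adapt the target path to your setup):

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: foo-redirect
plugin: pre-function
config:
  access:
  - |
    -- build the Location header, preserving the original query string
    local qs = kong.request.get_raw_query()    -- e.g. "param=value"
    local location = '/foo/bar/'
    if qs and qs ~= '' then
      location = location .. '?' .. qs
    end
    kong.response.exit(301, '', { ['Location'] = location })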

Related

Kong route and upstream server path

I have the following configuration of a service with a route in Kong:
- name: test
  host: test-service
  port: 80
  protocol: http
  path: /endpoint/
  routes:
  - name: test_route
    strip_path: true
    preserve_host: true
    paths:
    - /login
I am trying to understand the following behaviour:
when I access http://localhost/login, I end up with http://localhost/endpoint in my browser
however, when I access http://localhost/login/test/page, nothing changes: my browser stays on http://localhost/login/test/page, yet the upstream server serves my request properly
My expectation was that when using http://localhost/login/test/page, my browser would eventually show http://localhost/endpoint/test/page. Apparently I misunderstood it.
I understand that in this case Kong will use the upstream path /endpoint/test/page, because I have strip_path: true.
However, how can it be changed so that I end up with http://localhost/endpoint/test/page in my browser when I access http://localhost/login/test/page?
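Since Kong only rewrites the path it proxies to and never the URL shown in the browser, the browser URL will only change if Kong answers with a redirect. A possible sketch, reusing the pre-function redirect approach shown earlier and attached under the route (the /login prefix handling here is an assumption, adjust to your paths):

    plugins:
    - name: pre-function
      config:
        access:
        - |
          -- redirect instead of silently proxying, so the visible URL becomes /endpoint/...
          local path = kong.request.get_path()       -- e.g. /login/test/page
          local rest = path:gsub('^/login', '', 1)   -- -> /test/page
          kong.response.exit(301, '', { ['Location'] = '/endpoint' .. rest })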

Enforce a domain pattern that a service can use

I have a multi-tenant Kubernetes cluster. On it I have an nginx reverse proxy with load balancer and the domain *.example.com points to its IP.
Now, several namespaces are essentially grouped together as project A and project B (according to the different users).
How can I ensure that any service in a namespace with the label project=a can have any domain like my-service.project-a.example.com, but not something like my-service.project-b.example.com or my-service.example.com? Please keep in mind that I use NetworkPolicies to isolate the communication between the different projects, though communication with the nginx namespace and the reverse proxy is always possible.
Any ideas would be very welcome.
EDIT:
I made some progress: I have been deploying Gatekeeper to my GKE clusters via Helm charts. I am trying to ensure that only Ingress hosts of the form "*.project-name.example.com" are allowed. For this, I have different namespaces that each carry a label like "project=a", and each of these should only be allowed to use Ingress hosts of the form "*.a.example.com". Hence I need the project label information for the respective namespaces. I wanted to deploy the following resources:
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredingress
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredIngress
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredingress

        operations := {"CREATE", "UPDATE"}

        ns := input.review.object.metadata.namespace

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          input.request.kind.kind == "Ingress"
          not data.kubernetes.namespaces[ns].labels.project
          msg := sprintf("Ingress denied as namespace '%v' is missing 'project' label", [ns])
        }

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          input.request.kind.kind == "Ingress"
          operations[input.request.operation]
          host := input.request.object.spec.rules[_].host
          project := data.kubernetes.namespaces[ns].labels.project
          not fqdn_matches(host, project)
          msg := sprintf("invalid ingress host %v, has to be of the form *.%v.example.com", [host, project])
        }

        fqdn_matches(str, pattern) {
          str_parts := split(str, ".")
          count(str_parts) == 4
          str_parts[1] == pattern
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredIngress
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Ingress"]
---
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
However, when I try to set everything up in the cluster, I keep getting:
kubectl apply -f constraint_template.yaml
Error from server: error when creating "constraint_template.yaml": admission webhook "validation.gatekeeper.sh" denied the request: invalid ConstraintTemplate: invalid data references: check refs failed on module {template}: errors (2):
disallowed ref data.kubernetes.namespaces[ns].labels.project
disallowed ref data.kubernetes.namespaces[ns].labels.project
Do you know how to fix that and what I did wrong? Also, in case you happen to know a better approach, just let me know.
As an alternative to the other answer, you may use a validating webhook to enforce policy based on any parameter present in the request, for example name, namespace, annotations, spec, etc.
The validating webhook can be a service running in the cluster or external to the cluster. The service makes a decision based on the logic we put into it: for every request sent by a user, the API server sends an admission review to the webhook, and the webhook either approves or rejects the review.
You can read more about it here, and in a more descriptive post by me here.
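A minimal sketch of how such a validating webhook could be registered; the names, namespace, and service below are hypothetical, and the service behind it would implement the host-versus-project-label check:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-host-policy             # hypothetical name
webhooks:
- name: ingress-host-policy.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["networking.k8s.io"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["ingresses"]
  clientConfig:
    service:
      namespace: policy                  # hypothetical namespace of the webhook service
      name: ingress-host-validator       # hypothetical service implementing the check
      path: /validate
    caBundle: <base64-encoded CA bundle>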
If you want to enforce this rule on Kubernetes objects such as a ConfigMap or Ingress, I think you can use something like OPA.
In Kubernetes, Admission Controllers enforce semantic validation of objects during create, update, and delete operations. With OPA you can enforce custom policies on Kubernetes objects without recompiling or reconfiguring the Kubernetes API server.
reference

Prometheus + nginx-exporter: collect only from <some_nginx_container_ip>:9113

Disclaimer: I found out what Prometheus is about a day ago.
I'm trying to use Prometheus with the nginx exporter.
I copy-pasted a config example from a Grafana dashboard and it works flawlessly with node-exporter, but when I try to adapt it to nginx-exporter, deployed in the same pod as the nginx server, Prometheus shows lots of junk in Targets (all open ports for all available IPs).
So I wonder how I should adapt the job so that it lists only the needed container (with its name in labels, etc.):
- job_name: 'kubernetes-nginx-exporter'
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - api_servers:
    - 'https://kubernetes.default.svc'
    in_cluster: true
    role: container
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - source_labels: [__meta_kubernetes_role]
    action: replace
    target_label: kubernetes_role
  - source_labels: [__address__]
    regex: '(.*):10250'
    replacement: '${1}:9113'
    target_label: __address__
The right workaround was to add annotations to the deployment in its template section:
annotations:
  prometheus.io/scrape: 'true'
  prometheus.io/port: '9113'
and set role: pod in the job_name: 'kubernetes-pods' job (if not already set).
That's it: your endpoints will be present only with the ports you provided and with all the needed labels.
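For reference, a typical 'kubernetes-pods' job that honours those annotations looks roughly like this; it is a sketch based on the widely used Prometheus Kubernetes example configuration, not the exact config from this setup:

- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # keep only pods annotated with prometheus.io/scrape: 'true'
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: 'true'
  # use the port from the prometheus.io/port annotation as the scrape address
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  # carry pod labels, namespace, and pod name over as Prometheus labels
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    target_label: kubernetes_pod_name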

How do I ignore the top level domain when using host to route to a subdomain in Symfony?

I have a website that I'm testing locally using domain.lc. The website has a subdomain sub.domain.lc that routes to a different controller, like this:
my_bundle:
    host: sub.domain.lc
    resource: "@MyBundle/Controller/"
    type: annotation
    prefix: /
The subdomain routes to my bundle because of the host, but I would like it to also route to this bundle using sub.domain.com. Is there a way to ignore the top level domain using the host?
You can use a placeholder like this:
my_bundle:
    resource: "@MyBundle/Controller/"
    type: annotation
    prefix: /
    host: "{domain}"
    defaults:
        domain: "%domain%"
    requirements:
        domain: "sub\.domain\.(lc|com)"
But there is a problem with generating absolute URLs. It depends on where your app is currently running (lc or com), and you need to specify that as a container parameter (parameters.yml is a good place). Link to a similar problem.
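For example, parameters.yml could carry the current host (a sketch; the parameter name matches the %domain% placeholder above):

parameters:
    domain: sub.domain.lc    # switch to sub.domain.com where the app runs on .com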
Although kba did provide an answer that helped me very much and I will in fact accept his answer as the right one, I want to show you what my route ended up looking like:
my_bundle:
    host: sub.domain.{tld}
    resource: "@MyBundle/Controller/"
    type: annotation
    prefix: /
    requirements:
        tld: lc|nl
    defaults:
        tld: nl
I didn't use the Service Container Parameter in my route, because I know my domain won't change. Only my top level domain can change.

Symfony2/Memcached integration

I was following a blog post (link no longer available) and added memcached to services.yml:
parameters:
    memcached.servers:
        - { host: 127.0.0.1, port: 11211 }

services:
    memcached:
        class: Memcached
        calls:
            - [ addServers, [ %memcached.servers% ]]
Then in my controller:
$memcached = $this->get('memcached');
Looks very pretty, if I can get past the 500 error: You have requested a non-existent service "memcached"!
Test code in plain PHP connects to memcached without any trouble. I've run cache:clear and cleared the cache manually; it doesn't help.
Where should I look?
Adrien was right in his comments: the services.yml file isn't used by default. You have to explicitly include it in config.yml.
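For example, the import typically looks like this in config.yml (a sketch; the parameters.yml line is usually already present):

imports:
    - { resource: parameters.yml }
    - { resource: services.yml }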
