Using x-google-endpoints OpenAPI extension with multiple API versions in Cloud Endpoints

I'm trying to deploy two API versions to Google Cloud Endpoints but am running into configuration issues during deployment.
The API definition api-1.yaml looks like this:
swagger: "2.0"
info:
  description: "API"
  title: "API"
  version: "1.0.0"
host: "api.endpoints.GCP_PROJECT.cloud.goog"
basePath: "/v1"
x-google-api-name: v1
x-google-endpoints:
  - name: "api.endpoints.GCP_PROJECT.cloud.goog"
    target: "IP_ADDRESS"
...
This works just fine if deployed alone. However, if api-2.yaml is added:
swagger: "2.0"
info:
  description: "API"
  title: "API"
  version: "2.0.0"
host: "api.endpoints.GCP_PROJECT.cloud.goog"
basePath: "/v2"
x-google-api-name: v2
x-google-endpoints:
  - name: "api.endpoints.GCP_PROJECT.cloud.goog"
    target: "IP_ADDRESS"
...
This leads to a deployment error: OpenAPI spec is invalid. Multiple endpoint entries are defined in the extension 'x-google-endpoints'. At most one entry is allowed.
Removing the x-google-endpoints extension from one yaml file works, but it leaves the other yaml file incomplete and is therefore not an optimal solution.
Could there be an issue with how the yaml files are combined/validated during deployment? Can the x-google-endpoints extension be used to define a .cloud.goog domain for versioned APIs?

There are two ways to do this:
1) The version is in the domain name, such as v1-api.endpoints.GCP_PROJECT.cloud.goog.
You define and deploy two services: one for v1 and another for v2. Each has its own IP, its own service name and its own backend. This is the most straightforward and easiest approach (see the sketch after this list).
2) The version is in the path, such as api.endpoints.GCP_PROJECT.cloud.goog/v1. You can only define and deploy one service, but you have two backends. This one is trickier: you can use the x-google-backend extension in the OpenAPI spec and deploy one Cloud Endpoints service.
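For approach 1), a minimal sketch of the relevant spec headers, assuming the version is moved into the hostname (the file comments, hostnames and the *_IP_ADDRESS placeholders are illustrative, not from your project):
# api-v1.yaml - deployed as its own Endpoints service with its own IP
host: "v1-api.endpoints.GCP_PROJECT.cloud.goog"
x-google-endpoints:
  - name: "v1-api.endpoints.GCP_PROJECT.cloud.goog"
    target: "V1_IP_ADDRESS"
# api-v2.yaml - a second, independent service
host: "v2-api.endpoints.GCP_PROJECT.cloud.goog"
x-google-endpoints:
  - name: "v2-api.endpoints.GCP_PROJECT.cloud.goog"
    target: "V2_IP_ADDRESS"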

You are using two ESP proxies: v1_esp -> v1 and v2_esp -> v2.
Each proxy has its own IP, so you are trying to bind one domain name to two IPs. This is not supported.
My suggestion is to use one ESP instead:
esp -> v1 + v2, by using x-google-backend.
With the following in the openApi.yaml:
paths:
  /v1/path1:
    ...
    x-google-backend:
      address: v1_host
  # do above for all your path/methods
  /v2:
    ...
    x-google-backend:
      address: v2_host
  # do above for all your path/methods
x-google-endpoints:
  - name: "api.endpoints.GCP_PROJECT.cloud.goog"
    target: "IP_ADDRESS"
Add --enable_backend_routing and --rewrite to ESP to remove the /v1 and /v2 prefixes before the requests are sent to your backends.
We have not tested such a deployment, but you can try it.
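For illustration only, an untested sketch of how those flags might be passed when running the ESP container; the image tag, ports and the exact --rewrite syntax here are assumptions to verify against the ESP documentation:
docker run -d --name esp -p 8080:8080 \
  gcr.io/endpoints-release/endpoints-runtime:1 \
  --service=api.endpoints.GCP_PROJECT.cloud.goog \
  --rollout_strategy=managed \
  --http_port=8080 \
  --enable_backend_routing \
  --rewrite "^/v1/(.*)$ /$1" \
  --rewrite "^/v2/(.*)$ /$1"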

Related

Enforce a domain pattern that a service can use

I have a multi-tenant Kubernetes cluster. On it I have an nginx reverse proxy with a load balancer, and the domain *.example.com points to its IP.
Now, several namespaces are essentially grouped together as project A and project B (according to the different users).
How can I ensure that any service in a namespace with label project=a can have a domain like my-service.project-a.example.com, but not something like my-service.project-b.example.com or my-service.example.com? Please keep in mind that I use NetworkPolicies to isolate the communication between the different projects, although communication with the nginx namespace and the reverse proxy is always possible.
Any ideas would be very welcome.
EDIT:
I made some progress: I have been deploying Gatekeeper to my GKE clusters via Helm charts. I am now trying to ensure that only Ingress hosts of the form "*.project-name.example.com" are allowed. For this, I have different namespaces that each carry a label like "project=a", and each of these should only be allowed to use Ingress hosts of the form "*.a.example.com". Hence I need the project label information of the respective namespaces. I wanted to deploy the following resources:
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredingress
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredIngress
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredingress

        operations := {"CREATE", "UPDATE"}
        ns := input.review.object.metadata.namespace

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          input.request.kind.kind == "Ingress"
          not data.kubernetes.namespaces[ns].labels.project
          msg := sprintf("Ingress denied as namespace '%v' is missing 'project' label", [ns])
        }

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          input.request.kind.kind == "Ingress"
          operations[input.request.operation]
          host := input.request.object.spec.rules[_].host
          project := data.kubernetes.namespaces[ns].labels.project
          not fqdn_matches(host, project)
          msg := sprintf("invalid ingress host %v, has to be of the form *.%v.example.com", [host, project])
        }

        fqdn_matches(str, pattern) {
          str_parts := split(str, ".")
          count(str_parts) == 4
          str_parts[1] == pattern
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredIngress
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Ingress"]
---
apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: "gatekeeper-system"
spec:
  sync:
    syncOnly:
      - group: ""
        version: "v1"
        kind: "Namespace"
However, when I try to set everything up in the cluster, I keep getting:
kubectl apply -f constraint_template.yaml
Error from server: error when creating "constraint_template.yaml": admission webhook "validation.gatekeeper.sh" denied the request: invalid ConstraintTemplate: invalid data references: check refs failed on module {template}: errors (2):
disallowed ref data.kubernetes.namespaces[ns].labels.project
disallowed ref data.kubernetes.namespaces[ns].labels.project
Do you know how to fix that and what I did wrong? Also, in case you happen to know a better approach, just let me know.
As an alternative to the other answer, you may use a validation webhook to enforce rules based on any parameter present in the request, for example the name, namespace, annotations, spec, etc.
The validation webhook can be a service running inside the cluster or external to the cluster. This service essentially makes a decision based on the logic we put into it. For every request sent by a user, the API server sends a review request to the webhook, and the validation webhook either approves or rejects the review.
You can read more about it here; a more descriptive post by me is here.
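A minimal sketch of how such a webhook could be registered for Ingress objects, assuming a service named ingress-host-validator in a webhook namespace (all names and the path are illustrative, and the caBundle is elided):
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ingress-host-policy
webhooks:
  - name: ingress-host.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - apiGroups: ["networking.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["ingresses"]
    clientConfig:
      service:
        namespace: webhook
        name: ingress-host-validator
        path: /validate
      caBundle: "..."   # CA bundle for the webhook's serving certificate
The webhook service itself then inspects the AdmissionReview payload (host, namespace labels) and returns allowed: true or false.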
If you want to enforce this rule on a k8s object such as a ConfigMap or an Ingress, I think you can use something like OPA.
In Kubernetes, Admission Controllers enforce semantic validation of objects during create, update, and delete operations. With OPA you can enforce custom policies on Kubernetes objects without recompiling or reconfiguring the Kubernetes API server.
reference

Cloud Endpoints: is it possible to protect all API accesses using only a base URL?

As in, let's say my API is located at domain/_ah/api. We have domain/_ah/api/getUser, domain/_ah/api/stuff/getStuff, and domain/_ah/api/stuff/moreStuff/postMoreStuff.
Is it possible to do that by only defining something like this?
swagger: '2.0'
info:
  title: "Cloud Endpoints + Cloud Run"
  description: "Sample API on Cloud Endpoints with a Cloud Run backend"
  version: "1.0.0"
host: "domain"
schemes:
  - "https"
produces:
  - "application/json"
x-google-backend:
  jwt_audience: "audience"
  address: "domain_backend"
  protocol: "h2"
paths:
  /_ah/api/*:
    get, post, put, etc:
      description: "Protects Base URL"
      operationId: "authInfoFirebase"
      security:
        - firebase: []
securityDefinitions:
  firebase:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "https://securetoken.google.com/<project_id>"
    x-google-jwks_uri: "https://www.googleapis.com/service_accounts/v1/metadata/x509/securetoken@system.gserviceaccount.com"
    x-google-audiences: "<project_id>"
I am afraid Cloud Endpoints does not recognize wildcards as you specified.
Quoting the documentation:
“Endpoints only supports URL path template parameters that correspond to entire path segments (delimited by slashes /). URL path template parameters that correspond to partial path segments aren't supported.”[1]
A workaround for wildcards would be to use path templates.
You can use curly braces {} to mark parts of a URL as path parameters. Using your example:
domain/_ah/api/{value1}
domain/_ah/api/{value1}/{value2}
domain/_ah/api/{value1}/{value2}/{value3}
Just be careful not to overlap the path templates, like in this example:
/items/{itemid} ---> This is valid
/items/{itemId}/subitem ----> This is valid
/items/cat ----> This is NOT valid
[1] https://cloud.google.com/endpoints/docs/openapi/openapi-limitations#url_path_templating
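As a rough sketch of that workaround (the operation name and response body below are placeholders, not part of your original spec), the paths section could enumerate one templated path per depth and attach the Firebase security requirement to each:
paths:
  /_ah/api/{value1}:
    get:
      operationId: "getOneSegment"
      parameters:
        - name: value1
          in: path
          required: true
          type: string
      security:
        - firebase: []
      responses:
        '200':
          description: "OK"
  # repeat the same pattern for /_ah/api/{value1}/{value2}
  # and /_ah/api/{value1}/{value2}/{value3}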

How do I add an nginx load balancer to a kubernetes cluster on Jelastic?

I have the following jps manifest:
jpsVersion: 1.3
jpsType: install
application:
  id: my-app
  name: My App
  version: 0.0
  settings:
    fields:
      - name: envName
        caption: Env Name
        type: string
        required: true
      - name: topo
        type: radio-fieldset
        values:
          0-dev: '<b>Development:</b> one master (1) and one scalable worker (1+)'
          1-prod: '<b>Production:</b> multi master (3) with API balancers (2+) and scalable workers (2+)'
        default: 0-dev
      - name: k8s-version
        type: string
        caption: k8s manifest version
        default: v1.16.3
  onInstall:
    - installKubernetes
  actions:
    installKubernetes:
      install:
        jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
        envName: ${settings.envName}
        displayName: ${settings.envName}
        settings:
          deploy: cc
          topo: ${settings.topo}
          dashboard: version2
          ingress-controller: Nginx
          storage: true
          api: true
          monitoring: true
          version: ${settings.k8s-version}
          jaeger: false
Now, I'd like to add a load balancer in front of the k8s cluster, something like
env:
  topology:
    nodes:
      - nodeGroup: bl
        nodeType: nginx-dockerized
        tag: 1.16.1
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
Of course, the above kubernetes jps installation already creates a topology. Therefore, there is no way I can use the above env section. How can I add a new node to the topology created by the Jelastic kubernetes jps? I found addNodes, but it does not seem to allow defining what goes into the bl node group.
In the Jelastic API, I was able to find the EditNodeGroup method, which I believe would solve my problem. However, the documentation is not very clear; it is kind of missing an example from which I could guess how to fill in the parameters. How do I use that method to add an nginx load balancer to my k8s environment?
EDIT
The EditNodeGroup method is of no use for that problem. I think, currently, my best option is to fork jelastic-jps/kubernetes and adapt the beforeinstall for my needs. Do I have any other option? I browsed the API and found no way to add my nginx load balancer.
The environment topology cannot be changed during an external manifest invocation, since it is created within that manifest. But it can be altered after the manifest finishes.
The whole approach is:
onInstall:
  - installKubernetes
  - addBalancer
actions:
  installKubernetes:
    install:
      jps: https://github.com/jelastic-jps/kubernetes/blob/${settings.k8s-version}/manifest.jps
      envName: ${settings.envName}
      ...
  addBalancer:
    - install:
        envName: ${settings.envName}
        jps:
          type: update
          name: Add Balancer Node
          onInstall:
            - addNodes:
                ....
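We have not tested this ourselves, but filling in the addNodes part with the values from the env section in your question could look roughly like this (flexibleCloudlets stands in for cloudlets here to match the addNodes field names; double-check the fields against the cloudscripting reference below):
- addNodes:
    - nodeGroup: bl
      nodeType: nginx-dockerized
      tag: 1.16.1
      displayName: Node balancing
      count: 1
      fixedCloudlets: 1
      flexibleCloudlets: 4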
Please refer to https://github.com/jelastic-jps/kubernetes/blob/ad62208a5b3796bb7beeaedfce5c42b18512d9f0/addons/storage.jps for an example of how to use the "addNodes" action in a manifest.
Also, the reference https://docs.cloudscripting.com/creating-manifest/actions/#addnodes describes all fields that can be used.
The latest published version of K8s for Jelastic is v1.16.6, so you could use it in your manifest.
But please note that via this Balancer instance you will be accessing the default Kubernetes ingress controller, i.e. the same ingresses/paths that you currently have at "http(s)://".
Of course, you can assign a public IP to the added BL node and access the same functionality not via the Shared Balancers as before, but via that public IP from now on.
In a nutshell, the Jelastic Balancer instance currently does not provide Kubernetes LoadBalancer service functionality, if that is exactly what you need. The K8s LoadBalancer functionality will be added in the next release: public IPs added to the "cp" worker can then be automatically used for LoadBalancers created inside the Kubernetes cluster. We expect this functionality to be added in 1.16.8+.
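For clarity, the "Kubernetes LoadBalancer service functionality" mentioned above refers to objects like the following (a generic example, not Jelastic-specific; names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer      # asks the platform for an external IP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080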
Please let us know if you have any further questions.

Symfony2, Swagger 2 and Bundle TimeIncOSS swagger-bundle error

I work with a Symfony project and I want to generate documentation for my REST web service with Swagger 2, so I installed the swagger-bundle
and configured it. But when I try to generate the JSON API documentation with this command:
app/console -e=dev swagger:dump
I get this error:
You must call one of in() or append() methods before iterating over a
Finder.
This is my config file:
swagger:
  version: '2.0'
  info:
    title: 'My API'
    version: '1.0.0'
    description: 'My API Description'
  host: '127.0.0.1'
  base_path: '/v2'
  schemes:
    - http
  produces:
    - application/json
  consumes:
    - application/json
  annotations:
    bundles:
      - BOBundleBundle
Any help, please?
Check whether you put the settings in the correct config file (app/config/config_dev.yml).
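Since the command is run with -e=dev, a hedged sketch of what app/config/config_dev.yml would then need to contain (the swagger block is the one from your question, abbreviated; whether this resolves the Finder error depends on your setup):
# app/config/config_dev.yml
imports:
    - { resource: config.yml }

swagger:
    version: '2.0'
    info:
        title: 'My API'
        version: '1.0.0'
        description: 'My API Description'
    annotations:
        bundles:
            - BOBundleBundle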

Configuring URI prefix for REST webservice in dropwizard

I am developing a REST API using Dropwizard. The resource can be accessed using https://<host>:port/item/1. As can be seen, there is no URI prefix. If I have to configure a URI prefix, what needs to be done? Can it be configured in the YAML configuration file?
Thanks!
Yes, the URI prefix, a.k.a. the root path, can be configured in YAML. You can use the simple server factory configuration: just add these two lines to your YAML. I've used 'api' as the prefix; you can replace it with the URI prefix you want, after which the resource is served at https://<host>:port/api/item/1.
server:
  rootPath: '/api/*'
A slightly more elaborate server configuration looks something like this:
server:
  adminConnectors:
    - port: 18001
      type: http
  adminContextPath: /admin
  applicationConnectors:
    - port: 18000
      type: http
  rootPath: /api/*
  type: default
You can refer to this example, https://github.com/dropwizard/dropwizard/blob/master/dropwizard-example/example.yml, for server and other configuration details.
It's also a good idea to go through this if you are just getting started with Dropwizard: http://www.dropwizard.io/0.9.2/docs/getting-started.html
