How to prepend segments to OpenAPI paths? - nginx

I created a REST endpoint following this guide: https://quarkus.io/guides/rest-json
Locally I can successfully use the Swagger UI at <host>/q/swagger-ui, which uses <host>/q/openapi as input. So far so good.
However, in production, I use Nginx to forward the requests to <host>/foobar. Thus, the final URLs change to <host>/foobar/q/swagger-ui and <host>/foobar/q/openapi.
nginx.conf snippet where the Quarkus Docker container is running on port 49321:
location /foobar/ {
    proxy_pass http://172.17.0.1:49321/;
}
In the application.properties I already added the following line:
quarkus.swagger-ui.urls.direct=/foobar/q/openapi
By doing this, Swagger-UI finds the OpenAPI spec. But the OpenAPI spec contains the wrong URLs because it doesn't know about the /foobar/ URL segment.
How the generated OpenAPI spec looks:
---
paths:
  /some/url:
    get:
      tags:
      - blabla
      responses:
        "200":
          description: OK
How it needs to look (/foobar/ prepended to the paths):
---
paths:
  /foobar/some/url:
    get:
      tags:
      - blabla
      responses:
        "200":
          description: OK
I already checked the available OpenAPI properties at https://quarkus.io/guides/openapi-swaggerui#openapi, but they don't seem to solve my problem. Any ideas?

I solved it by setting the following in the application.properties:
quarkus.http.root-path=/foobar
and configuring Nginx as follows (nginx.conf):
location /foobar {
    proxy_pass http://172.17.0.1:49321/foobar;
}
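As a quick sanity check (a sketch; substitute your real host), both the spec and the UI should now be reachable under the prefix, and the generated paths should carry it as well:
# Fetch the spec through nginx; the paths inside should now start with /foobar
curl http://<host>/foobar/q/openapi
# Swagger UI is served at
#   http://<host>/foobar/q/swagger-ui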

Related

Adding HTML to backend response body by NGINX Kubernetes Ingress

Background
We're hosting a proprietary web application, which we'd like to customize in a rudimentary way to match our corporate design (mainly colors). Since the application doesn't support this and I don't have access to the source, I'd like to create a custom CSS stylesheet and include it in the app by manipulating its ingress (i.e. injecting CSS/stylesheets into the DOM).
http_sub_module of the Nginx Ingress Controller for Kubernetes
The http_sub_module seems similar to Apache's mod_substitute. When I exec nginx -V inside the nginx ingress pod, --with-http_sub_module is listed in the configure arguments, so it must be available in the currently deployed 1.19.2 version.
I found a similar question using subs_filter (instead of sub_filter). It seems the variant with the extra s is from NGINX Plus, also documented here. All linked examples there use subs_, whereas the regular community documentation uses sub_filter. Both of them ran without an error; I guess the Plus one is an alias if no Plus subscription is available.
Since sub_filter doesn't seem to have any effect, I tried both of them, without success:
annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/rewrite-target: /
  nginx.ingress.kubernetes.io/configuration-snippet: |
    #subs_filter_once on;
    #subs_filter_types 'text/html';
    # subs_filter 'en' 'de';
    # subs_filter '</body>' '<h1>TEST</h1></body>';
    sub_filter '</body>' '<h1>TEST</h1></body>';
Further things I've checked/tried
The configuration snippet is applied: I looked into the nginx ingress pod, and the server vhost block for my application has the sub_filter/subs_filter directive inside the / location block.
The default type for filtering is sub_filter_types text/html. The website returns Content-Type: text/html, so it should match. To be sure, I also set sub_filter_types "text/html"; in the snippet, and even *, which matches any MIME type according to the docs, both without any difference.
Doesn't this module work with the proxying used by k8s? I can't imagine that, since the module is relatively old and I see no reason why it shouldn't work when nginx acts as a reverse proxy, since it has access to the HTML headers/body there too.
It turned out that the problem was something completely different: the application supports gzip, so compression was enabled. But the http_sub_module doesn't support gzip, as documented here. It only works with plain-text responses; if the response is compressed, the module simply does nothing. This explains why it didn't work and why there was no error.
Luckily, the compression could easily be disabled without any modifications to the application by setting the following header:
proxy_set_header Accept-Encoding "";
If this is added to the nginx.ingress.kubernetes.io/configuration-snippet section, the proxy requests only plain text, and every application that is compliant with the specification will respect that.
But this doesn't mean we can't use any compression at all: gzip can still be used from the ingress to the user; it's just not supported from the ingress to the application.
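If compression towards the client is still wanted, it can be re-enabled on the controller itself, for example via the ingress-nginx ConfigMap. A sketch, assuming a default ingress-nginx installation (the ConfigMap name and namespace depend on how the controller was deployed):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumption: default name of the controller ConfigMap
  namespace: ingress-nginx         # assumption: default namespace
data:
  # Compress responses from the ingress to the client; the backend still
  # replies in plain text because of the Accept-Encoding header above.
  use-gzip: "true"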
Full working example annotation snippet
ingress:
  # ...
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # http://nginx.org/en/docs/http/ngx_http_sub_module.html
    nginx.ingress.kubernetes.io/configuration-snippet: |
      sub_filter "</body>" "<style>body{ color: red !important; }</style></body>";
      # The http_sub_module doesn't support compression from the ingress to the backend application
      proxy_set_header Accept-Encoding "";
This would apply the style block before the closing body tag like this:
<style>body{ color: red !important; }</style></body></html>
For production use, we can include a link to a custom CSS file here, which overrides the application's design to our needs.
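A sketch of that production variant, assuming a hypothetical stylesheet served at /custom/override.css (the path and the injection point are illustrative, modeled on the working example above):
annotations:
  kubernetes.io/ingress.class: "nginx"
  nginx.ingress.kubernetes.io/configuration-snippet: |
    # Inject a stylesheet link before the closing head tag;
    # /custom/override.css is a placeholder for your own hosted CSS file.
    sub_filter "</head>" "<link rel=\"stylesheet\" href=\"/custom/override.css\"></head>";
    # Still required: http_sub_module only works on uncompressed responses
    proxy_set_header Accept-Encoding "";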

Using x-google-endpoints OpenAPI extension with multiple API versions in Cloud Endpoints

I'm trying to deploy two API versions to Google Cloud Endpoints but end up facing configuration issues during the deployment.
API definition api-1.yaml looks like this:
swagger: "2.0"
info:
description: "API"
title: "API"
version: "1.0.0"
host: "api.endpoints.GCP_PROJECT.cloud.goog"
basePath: "/v1"
x-google-api-name: v1
x-google-endpoints:
- name: "api.endpoints.GCP_PROJECT.cloud.goog"
target: "IP_ADDRESS"
...
This works just fine if deployed alone. However if api-2.yaml is added:
swagger: "2.0"
info:
description: "API"
title: "API"
version: "2.0.0"
host: "api.endpoints.GCP_PROJECT.cloud.goog"
basePath: "/v2"
x-google-api-name: v2
x-google-endpoints:
- name: "api.endpoints.GCP_PROJECT.cloud.goog"
target: "IP_ADDRESS"
...
This leads to a deployment error: OpenAPI spec is invalid. Multiple endpoint entries are defined in the extension 'x-google-endpoints'. At most one entry is allowed.
Removing the x-google-endpoints extension from one YAML file works, but it leaves the other YAML file incomplete and is therefore not an optimal solution.
Could there be an issue with combining/validating the YAML files during deployment? Can the x-google-endpoints extension be used to define a .cloud.goog domain for versioned APIs?
There are two ways to do this:
1) The version is in the domain name, such as v1-api.endpoints.GCP_PROJECT.cloud.goog.
You define and deploy two services: one for v1 and another for v2. Each has its own IP address, its own service name, and its own backend. This is the most straightforward and easiest approach.
2) The version is in the path, such as api.endpoints.GCP_PROJECT.cloud.goog/v1. You can only define and deploy one service, but you have two backends. This one is trickier: you can use the x-google-backend extension in the OpenAPI spec and deploy one Cloud Endpoints service.
You are currently using two ESP proxies: v1_esp -> v1 and v2_esp -> v2.
Each proxy has its own IP address, and you are trying to bind one domain name to two IPs. This is not supported.
My suggestion is to use one ESP like this:
esp -> v1 + v2, by using x-google-backend.
With the following in openApi.yaml:
paths:
  /v1/path1:
    ...
    x-google-backend:
      address: v1_host
    # do above for all your paths/methods
  /v2:
    ...
    x-google-backend:
      address: v2_host
    # do above for all your paths/methods
x-google-endpoints:
- name: "api.endpoints.GCP_PROJECT.cloud.goog"
  target: "IP_ADDRESS"
Add --enable_backend_routing and --rewrite to the ESP startup flags to remove the /v1 and /v2 prefixes before sending the request to your backends.
We have not tested such a deployment, but you can try it.
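As a rough sketch of what that single-proxy setup could look like as a Kubernetes container spec (the service name comes from the spec above; the argument values and especially the --rewrite syntax are assumptions based on the flags named in this answer, so check the ESP reference before relying on them):
# Deploy the combined spec first (standard Cloud Endpoints command):
#   gcloud endpoints services deploy openApi.yaml
#
# ESP container sketch; rewrite patterns are illustrative only.
containers:
- name: esp
  image: gcr.io/endpoints-release/endpoints-runtime:1
  args:
  - --http_port=8080
  - --service=api.endpoints.GCP_PROJECT.cloud.goog
  - --rollout_strategy=managed
  - --enable_backend_routing
  - --rewrite=^/v1/(.*)$ /$1
  - --rewrite=^/v2/(.*)$ /$1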

How to disable interception of errors by Ingress in a Tectonic kubernetes setup

I have a couple of NodeJS backends running as pods in a Kubernetes setup, with Ingress-managed nginx over it.
These backends are API servers, and can return 400, 404, or 500 responses during normal operations. These responses would provide meaningful data to the client; besides the status code, the response has a JSON-serialized structure in the body informing about the error cause or suggesting a solution.
However, Ingress will intercept these error responses, and return an error page. Thus the client does not receive the information that the service has tried to provide.
There's a closed ticket in the kubernetes-contrib repository suggesting that it is now possible to turn off error interception: https://github.com/kubernetes/contrib/issues/897. Being new to kubernetes/ingress, I cannot figure out how to apply this configuration in my situation.
For reference, this is the output of kubectl describe ingress <ingress-name> (redacted names and IPs):
Name:             ingress-name-redacted
Namespace:        default
Address:          127.0.0.1
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                        Path  Backends
  ----                        ----  --------
  public.service.example.com
                              /     service-name:80 (<none>)
Annotations:
  rewrite-target:         /
  service-upstream:       true
  use-port-in-redirects:  true
Events:  <none>
I have solved this on Tectonic 1.7.9-tectonic.4.
In the Tectonic web UI, go to Workloads -> Config Maps and filter by namespace tectonic-system.
In the config maps shown, you should see one named "tectonic-custom-error".
Open it and go to the YAML editor.
In the data field you should have an entry like this:
custom-http-errors: '404, 500, 502, 503'
which configures which HTTP responses will be captured and shown with the custom Tectonic error page.
If you don't want some of those, just remove them, or clear them all.
It should take effect as soon as you save the updated config map.
Of course, you could do the same from the command line with kubectl edit:
$> kubectl edit cm tectonic-custom-error --namespace=tectonic-system
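For reference, a stripped-down sketch of what the edited config map could look like (only the relevant data key is shown):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tectonic-custom-error
  namespace: tectonic-system
data:
  # Codes listed here are intercepted and replaced with the Tectonic error page.
  # Removing 404 and 500 from the default list lets the API's own JSON error
  # bodies reach the client.
  custom-http-errors: '502, 503'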
Hope this helps :)

Tyk gateway with Nginx and Apache Tomcat 8 (ubuntu 14.04)

Just wondering what I am missing here when trying to create an API with Tyk Dashboard.
My setup is:
Nginx > Apache Tomcat 8 > Java Web Application > (database)
Nginx is already working, proxying calls to Apache Tomcat on the default port 8080.
Example: tomcat.myserver.com/webapp/get/1
200-OK
I have previously set up tyk-dashboard and tyk-gateway as follows, using a custom node port 8011:
Tyk dashboard:
$ sudo /opt/tyk-dashboard/install/setup.sh --listenport=3000 --redishost=localhost --redisport=6379 --mongo=mongodb://127.0.0.1/tyk_analytics --tyk_api_hostname=$HOSTNAME --tyk_node_hostname=http://127.0.0.1 --tyk_node_port=8011 --portal_root=/portal --domain="dashboard.tyk-local.com"
Tyk gateway:
/opt/tyk-gateway/install/setup.sh --dashboard=1 --listenport=8011 --redishost=127.0.0.1 --redisport=6379 --domain=""
/etc/hosts already configured (not really needed):
127.0.0.1 dashboard.tyk-local.com
127.0.0.1 portal.tyk-local.com
Tyk Dashboard configurations (nothing special here):
API name: foo
Listen path: /foo
API slug: foo
Target URL: tomcat.myserver.com/webapp/
What URI am I supposed to call? Is there any setup I need to add in Nginx?
myserver.com/foo -> 502 from nginx
myserver.com:8011/foo -> does not respond
foo.myserver.com -> 502 from nginx
(everything is running on the same server)
SOLVED:
Tyk Gateway configuration was incorrect.
I needed to add the --mongo flag and remove the --domain flag in the setup.sh call:
/opt/tyk-gateway/install/setup.sh --dashboard=1 --listenport=8011 --redishost=localhost --redisport=6379 --mongo=mongodb://127.0.0.1/tyk_analytics
So, calling curl -H "Authorization: null" 127.0.0.1:8011/foo
I get:
{
  "error": "Key not authorised"
}
I am not sure about the /foo path; I think it previously served the purpose that the /hello path does now. But it appears there is a key-not-authorised issue. If the call is made against the Gateway API, the secret value may be missing; it is required when making calls to the gateway (except for the hello and reload paths):
x-tyk-authorization: <your-secret>
However, since a dashboard is present, I would suggest using the Dashboard API to create the API definition instead.
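As a quick check that the gateway secret is wired up correctly, a sketch against the Gateway API on the listen port configured above (the /tyk/apis endpoint and the placeholder secret are assumptions to verify against your own setup):
# List the API definitions the gateway knows about; <your-secret> must match
# the "secret" value in the gateway's tyk.conf.
curl -H "x-tyk-authorization: <your-secret>" http://127.0.0.1:8011/tyk/apis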

Configuring URI prefix for REST webservice in dropwizard

I am developing a REST API using dropwizard. The resource can be accessed using https://<host>:port/item/1. As you can see, there is no URI prefix. If I have to configure a URI prefix, what needs to be done? Can it be configured in the YAML configuration file?
Thanks!
Yes, the URI prefix, a.k.a. the root path, can be configured in YAML. You could use the simple server factory configuration. Just add these two lines to your YAML; I've used 'api' as the prefix, which you can replace with the URI prefix you want.
server:
  rootPath: '/api/*'
A slightly more elaborate server configuration looks something like this:
server:
  adminConnectors:
  - port: 18001
    type: http
  adminContextPath: /admin
  applicationConnectors:
  - port: 18000
    type: http
  rootPath: /api/*
  type: default
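With this in place, a resource that was previously served at https://<host>:port/item/1 becomes reachable under the prefix instead. A quick before/after sketch, keeping the placeholder host and port from the question:
# Before setting rootPath: resource at the root
curl https://<host>:port/item/1
# After setting rootPath: '/api/*': resource under the prefix
curl https://<host>:port/api/item/1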
You can refer to this example https://github.com/dropwizard/dropwizard/blob/master/dropwizard-example/example.yml for server and other configuration details.
It's also a good idea to go through this if you are just getting started with dropwizard http://www.dropwizard.io/0.9.2/docs/getting-started.html
