Testing NGINX configuration

I have a reverse proxy server with NGINX and I want to test its configuration automatically.
What I want in the end is a single command I can run: it starts NGINX with the configuration, issues several HTTP requests, and then tracks whether the right proxied server was called for each.
I've been thinking of setting up an environment with docker-compose and using curl/wget with the list of URLs I want to test. What I don't know is how to mock certain domains and track the forwarded requests.
Is there a tool to do that or should I write a server manually?

After experimenting a bit I managed to create this solution.
Use Docker Compose, WireMock and Newman. The idea is to set up NGINX proxying requests to WireMock (where you can control whether the request matched the right structure); then, with Newman, you can run a Postman collection that automatically checks that the stubbed responses are the right ones.
Example
Create all these files in a folder, start the testing environment by running
docker-compose up -d nginx wiremock
and then, to run the test suite
docker-compose run --rm newman
It should print the results of the collection.
Files
docker-compose.yml
version: "3"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./config:/etc/nginx
  wiremock:
    image: wiremock/wiremock:2.32.0
    command: [ "--port", "80", "--verbose" ]
    ports:
      - "8080:80"
    volumes:
      - ./wiremock:/home/wiremock
    networks:
      default:
        aliases:
          - backend-service-1
          - backend-service-2
  newman:
    image: postman/newman
    volumes:
      - ./newman:/etc/newman
    command: [ "run", "example.postman_collection.json" ]
config/nginx.conf
events {
    worker_connections 1024;
}

http {
    resolver 127.0.0.11; # Docker's internal DNS resolver

    server {
        listen 80 default_server;

        location /some/path/ {
            proxy_set_header X-Forwarded-Host $host;
            proxy_pass http://backend-service-1/some/path;
        }

        location /other/path/ {
            proxy_set_header X-Forwarded-Host $host;
            proxy_pass http://backend-service-2/other/path;
        }
    }
}
wiremock/mappings/some-path.json
{
  "request": {
    "method": "GET",
    "url": "/some/path",
    "headers": {
      "Host": {
        "equalTo": "backend-service-1",
        "caseInsensitive": true
      }
    }
  },
  "response": {
    "status": 200,
    "body": "{\"host\": \"from-1\"}",
    "headers": {
      "Content-Type": "application/json"
    }
  }
}
wiremock/mappings/other-path.json
{
  "request": {
    "method": "GET",
    "url": "/other/path",
    "headers": {
      "Host": {
        "equalTo": "backend-service-2",
        "caseInsensitive": true
      }
    }
  },
  "response": {
    "status": 200,
    "body": "{\"host\": \"from-2\"}",
    "headers": {
      "Content-Type": "application/json"
    }
  }
}
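Conceptually, each mapping matches on the method, the URL, and the Host header (case-insensitively); that is what lets a single WireMock instance impersonate both backends behind its two network aliases. A rough Python sketch of that matching logic (an illustration of the idea, not WireMock's actual code):

```python
def matches(stub, method, url, headers):
    """Return True if a request matches a WireMock-style stub (simplified)."""
    req = stub["request"]
    if req["method"] != method or req["url"] != url:
        return False
    # Header matchers: here we only model "equalTo" with "caseInsensitive".
    for name, matcher in req.get("headers", {}).items():
        value = headers.get(name, "")
        if matcher.get("caseInsensitive"):
            if value.lower() != matcher["equalTo"].lower():
                return False
        elif value != matcher["equalTo"]:
            return False
    return True


some_path_stub = {
    "request": {
        "method": "GET",
        "url": "/some/path",
        "headers": {"Host": {"equalTo": "backend-service-1", "caseInsensitive": True}},
    }
}

# The Host header set by the proxied request decides which stub answers.
print(matches(some_path_stub, "GET", "/some/path", {"Host": "BACKEND-SERVICE-1"}))  # True
print(matches(some_path_stub, "GET", "/some/path", {"Host": "backend-service-2"}))  # False
```

So if NGINX forwards to the wrong alias, no stub matches and WireMock returns its "request not matched" response, which the test suite then catches.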
newman/example.postman_collection.json
{
  "info": {
    "name": "example",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "some path",
      "event": [
        {
          "listen": "test",
          "script": {
            "exec": [
              "pm.test(\"request backend service 1\", function () {",
              "    pm.response.to.have.status(200);",
              "",
              "    var jsonData = pm.response.json();",
              "    pm.expect(jsonData.host).to.eql(\"from-1\");",
              "});",
              ""
            ],
            "type": "text/javascript"
          }
        }
      ],
      "request": {
        "method": "GET",
        "header": [],
        "url": {
          "raw": "http://nginx/some/path/",
          "protocol": "http",
          "host": [ "nginx" ],
          "path": [ "some", "path", "" ]
        }
      },
      "response": []
    },
    {
      "name": "other path",
      "event": [
        {
          "listen": "test",
          "script": {
            "exec": [
              "pm.test(\"request backend service 2\", function () {",
              "    pm.response.to.have.status(200);",
              "",
              "    var jsonData = pm.response.json();",
              "    pm.expect(jsonData.host).to.eql(\"from-2\");",
              "});",
              ""
            ],
            "type": "text/javascript"
          }
        }
      ],
      "request": {
        "method": "GET",
        "header": [],
        "url": {
          "raw": "http://nginx/other/path/",
          "protocol": "http",
          "host": [ "nginx" ],
          "path": [ "other", "path", "" ]
        }
      },
      "response": []
    }
  ]
}
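If you would rather not depend on Newman, the same two checks can be scripted with any HTTP client. Below is a sketch using only Python's standard library; the base URL is a parameter, and, to keep the example self-contained, it is exercised here against a tiny in-process stub rather than the real NGINX container:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


def check_route(base_url, path, expected_host):
    """Fetch base_url + path and assert the stubbed backend identified itself."""
    with urlopen(base_url + path) as resp:
        assert resp.status == 200
        body = json.loads(resp.read())
        assert body["host"] == expected_host, body


# --- demo stub standing in for NGINX + WireMock (illustration only) ---
class Stub(BaseHTTPRequestHandler):
    ROUTES = {"/some/path/": "from-1", "/other/path/": "from-2"}

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps({"host": self.ROUTES[self.path]}).encode())

    def log_message(self, *args):  # keep output quiet
        pass


server = HTTPServer(("127.0.0.1", 0), Stub)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

check_route(base, "/some/path/", "from-1")
check_route(base, "/other/path/", "from-2")
print("all routes OK")
server.shutdown()
```

Against the real compose environment you would call `check_route("http://localhost", "/some/path/", "from-1")` instead.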

Related

Ocelot Swagger MMLib.SwaggerForOcelot showing "No operations defined in spec!"

I am using Ocelot gateway and for swagger document using "MMLib.SwaggerForOcelot" library.
For some Swagger keys, the Swagger UI shows "No operations defined in spec!" and the Swagger JSON comes back without paths, like
{
  "openapi": "3.0.1",
  "info": {
    "title": "Admin API",
    "version": "v1"
  },
  "paths": {},
  "components": {
    "schemas": {}
  }
}
The Ocelot route configuration is
{
  "DownstreamPathTemplate": "/api/admin/v{version}/{everything} ",
  "DownstreamScheme": "http",
  "DownstreamHostAndPorts": [
    {
      "Host": "localhost",
      "Port": 5000
    }
  ],
  "UpstreamPathTemplate": "/api/admin/v{version}/{everything}",
  "UpstreamHttpMethod": [],
  "QoSOptions": {
    "ExceptionsAllowedBeforeBreaking": 3,
    "DurationOfBreak": 1000,
    "TimeoutValue": 900000
  },
  "SwaggerKey": "AdminAPI"
}
and the Swagger configuration is
"SwaggerEndPoints": [
  {
    "Key": "AdminAPI",
    "Config": [
      {
        "Name": "Admin API",
        "Version": "v1",
        "Url": "http://localhost:5000/swagger/v1/swagger.json"
      }
    ]
  }
]
After reviewing the MMLib.SwaggerForOcelot source code, it looks like it has something to do with the version in the downstream path. Any clue how this can be fixed?
The issue is that MMLib.SwaggerForOcelot does not consider {version} when doing the Ocelot transformation.
RouteOptions has a property TransformByOcelotConfig which is true by default, so once the Swagger JSON is obtained from the downstream service, the transformation is applied.
There, it tries to find the route in the route configuration as shown below, and if the route is not found, it deletes the path from the Swagger JSON:
private static RouteOptions FindRoute(IEnumerable<RouteOptions> routes, string downstreamPath, string basePath)
{
    string downstreamPathWithBasePath = PathHelper.BuildPath(basePath, downstreamPath);
    return routes.FirstOrDefault(p
        => p.CanCatchAll
            ? downstreamPathWithBasePath.StartsWith(p.DownstreamPathWithSlash, StringComparison.CurrentCultureIgnoreCase)
            : p.DownstreamPathWithSlash.Equals(downstreamPathWithBasePath, StringComparison.CurrentCultureIgnoreCase));
}
The problem is that StartsWith returns false, since the Swagger JSON path looks like
/api/admin/v{version}/Connections
and route config is like
/api/admin/v{version}/{everything}
and {version} gets replaced with the version given in the Swagger options, so it becomes
/api/admin/v1/{everything}
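To see why the lookup fails, here is a small Python sketch (Python is used only for illustration) reproducing the comparison that FindRoute performs:

```python
# The catch-all route's downstream path after {version} is substituted,
# with the trailing {everything} stripped off for the prefix check.
route_prefix = "/api/admin/v1/"

# The path exactly as it appears in the downstream service's swagger JSON.
swagger_path = "/api/admin/v{version}/Connections"

# Case-insensitive StartsWith, as in the C# FindRoute above.
matches = swagger_path.lower().startswith(route_prefix.lower())
print(matches)  # False -> the path gets deleted from the swagger JSON
```

The literal `v{version}` in the Swagger document can never start with the substituted `v1`, so every versioned path is dropped and the spec ends up with `"paths": {}`.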
There are two ways to fix this:
Either set "TransformByOcelotConfig": false in the Swagger options
"SwaggerEndPoints": [
  {
    "Key": "AdminAPI",
    "TransformByOcelotConfig": false,
    "Config": [
      {
        "Name": "Admin API",
        "Version": "v1",
        "Url": "http://localhost:5000/swagger/v1/swagger.json"
      }
    ]
  }
]
or change the route so that it only has the {everything} placeholder:
{
  "DownstreamPathTemplate": "/api/admin/{everything} ",
  "DownstreamScheme": "http",
  "DownstreamHostAndPorts": [
    {
      "Host": "localhost",
      "Port": 5000
    }
  ],
  "UpstreamPathTemplate": "/api/admin/{everything}",
  "UpstreamHttpMethod": [],
  "QoSOptions": {
    "ExceptionsAllowedBeforeBreaking": 3,
    "DurationOfBreak": 1000,
    "TimeoutValue": 900000
  },
  "SwaggerKey": "AdminAPI"
}

Wildcard path proxy to Google Cloud Run

I have two cloud run services (Next.js and API server) and I want to serve them through a single endpoint.
I want requests to /api to be forwarded to the API service and all other requests (/*) to be forwarded to the Next.js server.
The Cloud Run documentation suggests that I use Cloud Endpoints, but it does not seem to support wildcard paths.
What are the possible alternatives?
Google API Gateway supports different wildcards. I'm using Cloud Functions as the backend, but that shouldn't make a difference when using the gateway with Cloud Run.
My scenario:
/ should route to the /index.html
/assets should route to the /assets/any-file-here.png
/logo-256.png should route to the /logo-256.png
/endpoint-1 should route only to the API, but hosted on another function
/endpoint-2/some-param should route only to the API, hosted on the same function as the assets
With this configuration everything gets routed as intended, using double-wildcard matching.
It doesn't matter that the wildcard comes before the specific routes; the gateway handles this correctly.
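The gateway's behavior can be thought of as "the most specific template wins, regardless of declaration order". The following Python sketch is a conceptual model of that idea (not the gateway's actual algorithm); templates with more literal characters are tried first, which is why /endpoint-1 is still reached even though /{files=**} could match everything:

```python
import re

# Path templates from the spec below.
TEMPLATES = ["/", "/{files=**}", "/endpoint-1", "/endpoint-2/{some_param}"]


def to_regex(template):
    # {name=**} matches across slashes; {name} matches a single segment.
    pattern = re.sub(r"\{\w+=\*\*\}", r"(.+)", template)
    pattern = re.sub(r"\{\w+\}", r"([^/]+)", pattern)
    return re.compile("^" + pattern + "$")


def route(path):
    # Try templates with the longest literal part first, so specific routes win.
    ranked = sorted(TEMPLATES,
                    key=lambda t: len(re.sub(r"\{[^}]*\}", "", t)),
                    reverse=True)
    for template in ranked:
        if to_regex(template).match(path):
            return template
    return None


print(route("/endpoint-1"))           # /endpoint-1
print(route("/assets/logo-256.png"))  # /{files=**}
print(route("/endpoint-2/abc"))       # /endpoint-2/{some_param}
```

In other words, the catch-all only receives what no more specific template claims first.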
{
  "swagger": "2.0",
  "info": {
    "version": "0.0.1",
    "title": "Some API w/ Assets"
  },
  "paths": {
    "/": {
      "get": {
        "summary": "home",
        "operationId": "home",
        "parameters": [],
        "x-google-backend": {
          "address": "https://THE-GOOGLE-RUN-OR-FUNCTION",
          "path_translation": "CONSTANT_ADDRESS"
        },
        "responses": {
          "200": {
            "description": "Home"
          }
        }
      }
    },
    "/{files=**}": {
      "get": {
        "summary": "assets",
        "operationId": "assets",
        "parameters": [
          {
            "in": "path",
            "name": "files",
            "type": "string",
            "required": true
          }
        ],
        "x-google-backend": {
          "address": "https://THE-GOOGLE-RUN-OR-FUNCTION",
          "path_translation": "APPEND_PATH_TO_ADDRESS"
        },
        "responses": {
          "200": {
            "description": "assets"
          }
        }
      }
    },
    "/endpoint-1": {
      "get": {
        "summary": "Some pure backend api",
        "operationId": "ep1",
        "x-google-backend": {
          "address": "https://SOME-OTHER-GOOGLE-RUN-OR-FUNCTION",
          "path_translation": "APPEND_PATH_TO_ADDRESS"
        },
        "parameters": [],
        "responses": {
          "200": {
            "description": "result values"
          }
        }
      }
    },
    "/endpoint-2/{some_param}": {
      "get": {
        "summary": "Some pure backend API with path param",
        "operationId": "ep2",
        "parameters": [
          {
            "in": "path",
            "name": "some_param",
            "type": "string",
            "required": true
          }
        ],
        "x-google-backend": {
          "address": "https://THE-GOOGLE-RUN-OR-FUNCTION",
          "path_translation": "APPEND_PATH_TO_ADDRESS"
        },
        "responses": {
          "200": {
            "description": "result values"
          }
        }
      }
    }
  }
}
But with this setup your pages won't be served that fast; I recommend putting a Google Load Balancer with Cloud CDN in front of your API gateway when you are serving files.
This is best addressed by using Firebase Hosting, since they have a tutorial for doing exactly this.
Hope you find this useful.

How to generate mock server for pact consumer from contract json file?

I want to use the contract file from the provider to run tests against the consumer.
I have:
{
  "provider": {
    "name": "Provider"
  },
  "consumer": {
    "name": "Consumer"
  },
  "interactions": [
    {
      "description": "Get data",
      "request": {
        "method": "Get",
        "path": "/data/1"
      },
      "response": {
        "status": 200,
        "headers": {
          "Content-Type": "application/json"
        },
        "body": {
          "message": ""
        }
      },
      "providerState": "state"
    }
  ],
  "metadata": {
    "pact-specification": {
      "version": "2.0.0"
    },
    "pact-jvm": {
      "version": "3.5.6"
    }
  }
}
and I want to use it to generate a Pact mock server, like:
RequestResponsePact pact = new RequestResponsePact(mockServerDescriptionString);
Is it possible to do this?
No. But you can use the pact-stub-server or the pact-stub-service CLI.
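If installing one of those CLIs is not an option, a throwaway stub can also be hand-rolled from the pact file with a few lines of standard-library Python. This is a sketch under the assumption that matching on method and path is enough for your tests (the pact dict below stands in for `json.load(open("pact.json"))`):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in for the pact file above; normally: pact = json.load(open("pact.json"))
pact = {
    "interactions": [
        {
            "request": {"method": "Get", "path": "/data/1"},
            "response": {
                "status": 200,
                "headers": {"Content-Type": "application/json"},
                "body": {"message": ""},
            },
        }
    ]
}

# Index the interactions by (METHOD, path).
routes = {
    (i["request"]["method"].upper(), i["request"]["path"]): i["response"]
    for i in pact["interactions"]
}


class PactStub(BaseHTTPRequestHandler):
    def do_GET(self):
        resp = routes.get(("GET", self.path))
        if resp is None:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(resp["status"])
        for name, value in resp.get("headers", {}).items():
            self.send_header(name, value)
        self.end_headers()
        self.wfile.write(json.dumps(resp["body"]).encode())

    def log_message(self, *args):  # keep output quiet
        pass


server = HTTPServer(("127.0.0.1", 0), PactStub)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urlopen(f"http://127.0.0.1:{port}/data/1") as r:
    print(r.status, json.loads(r.read()))  # 200 {'message': ''}
server.shutdown()
```

Note this ignores provider states and the finer matching rules of the Pact specification, which is exactly what the dedicated stub servers handle for you.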

Httpbeat Metrics not showing up in Kibana

Dears,
I'm new to Kibana/Elasticsearch/Httpbeat and setting it up is causing me a bit of a headache...
Httpbeat runs and pumps data into Elasticsearch:
However, when I try to create a visualization I run into trouble;
somehow the data is not there...
This might also be useful:
And the template json:
{
  "mappings": {
    "_default_": {
      "_meta": {
        "version": "5.4.0"
      },
      "dynamic_templates": [
        {
          "strings_as_keyword": {
            "mapping": {
              "ignore_above": 1024,
              "type": "keyword"
            },
            "match_mapping_type": "string"
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "beat": {
          "properties": {
            "hostname": { "ignore_above": 1024, "type": "keyword" },
            "name": { "ignore_above": 1024, "type": "keyword" },
            "version": { "ignore_above": 1024, "type": "keyword" }
          }
        },
        "meta": {
          "properties": {
            "cloud": {
              "properties": {
                "availability_zone": { "ignore_above": 1024, "type": "keyword" },
                "instance_id": { "ignore_above": 1024, "type": "keyword" },
                "machine_type": { "ignore_above": 1024, "type": "keyword" },
                "project_id": { "ignore_above": 1024, "type": "keyword" },
                "provider": { "ignore_above": 1024, "type": "keyword" },
                "region": { "ignore_above": 1024, "type": "keyword" }
              }
            }
          }
        },
        "request": {
          "properties": {
            "body": { "ignore_above": 1024, "type": "keyword" },
            "headers": { "properties": {}, "type": "nested" },
            "method": { "ignore_above": 1024, "type": "keyword" },
            "url": { "ignore_above": 1024, "type": "keyword" }
          }
        },
        "response": {
          "properties": {
            "body": { "ignore_above": 1024, "type": "keyword" },
            "code": { "ignore_above": 1024, "type": "keyword" },
            "headers": { "properties": {}, "type": "nested" },
            "jsonBody": {
              "properties": {
                "globalTime": { "type": "long" }
              }
            }
          }
        },
        "tags": { "ignore_above": 1024, "type": "keyword" }
      }
    }
  },
  "order": 0,
  "settings": {
    "index.mapping.total_fields.limit": 10000,
    "index.refresh_interval": "1m"
  },
  "template": "httpbeat-*"
}
The httpbeat.yml
######################## Httpbeat Configuration Example ########################

############################## Httpbeat ########################################
httpbeat:
  hosts:
    # Each - Host endpoints to call. Below are the host endpoint specific configurations
    -
      # Optional cron expression, defines when to poll the host endpoint.
      # Default is every 1 minute.
      schedule: "@every 1m"

      # The URL endpoint to call by Httpbeat
      url: (a correct url)

      # HTTP method to use.
      # Possible options are:
      # * get
      # * delete
      # * head
      # * patch
      # * post
      # * put
      method: get

      # Optional additional headers to send to the endpoint
      #headers:
      #  Accept: application/json

      # Optional basic authentication
      basic_auth:
        # Basic authentication username
        username: theetsa
        # Basic authentication password
        password: (a very secret password)

      # Type to be published in the 'type' field. For Elasticsearch output,
      # the type defines the document type these entries should be stored
      # in. Default: httpbeat
      #document_type:

      # Optional output format for the response body.
      # Possible options are:
      # * string
      # * json
      # Default output format is 'string'
      output_format: json

      # Optional conversion of dots in keys in the JSON response body. Off by default.
      # Possible options are:
      # * replace - replaces dots with a different character. The default value is `_`.
      # * unflatten - converts {"foo.bar":false} to {"foo":{"bar":false}}
      #json_dot_mode: replace

      # Enable SSL support. SSL is automatically enabled, if any SSL setting is set.
      #ssl.enabled: true

      # Configure SSL verification mode. If `none` is configured, all server hosts
      # and certificates will be accepted. In this mode, SSL based connections are
      # susceptible to man-in-the-middle attacks. Use only for testing. Default is
      # `full`.
      #ssl.verification_mode: full

      # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
      # 1.2 are enabled.
      #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]

      # Optional SSL configuration options. SSL is off by default.
      # List of root certificates for HTTPS server verifications
      #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for SSL client authentication
      #ssl.certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #ssl.key: "/etc/pki/client/cert.key"

      # Optional passphrase for decrypting the Certificate Key.
      #ssl.key_passphrase: ''

      # Configure cipher suites to be used for SSL connections
      #ssl.cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #ssl.curve_types: []

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
I really don't know what I'm doing wrong :-/
I tried to use the same settings as in Metricbeat, where the graphs do work. I also looked inside the logs but couldn't find anything useful there...
I noticed that the beat version is 4.0.0, which might be the issue; I really don't know :-/
Thanks for any help or pointers...
S.
I'm not sure what did the trick, but I:
- stopped Httpbeat
- stopped Elasticsearch
- deleted all indexes (rm -Rf data/nodes/0/*)
- restarted Elasticsearch
- used this template:
httpbeat.template-es2x.json:
{
  "mappings": {
    "my_type": {
      "_meta": {
        "version": "5.4.0"
      },
      "dynamic_templates": [
        {
          "integers": {
            "match_mapping_type": "long",
            "mapping": {
              "type": "integer"
            }
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "response": {
          "properties": {
            "jsonBody": {
              "properties": {
                "globalTime": {
                  "type": "long"
                }
              }
            }
          }
        }
      },
      "fields": {
        "properties": {}
      }
      -> more about this below...
    }
  },
  "order": 0,
  "settings": {
    "index.mapping.total_fields.limit": 10000,
    "index.refresh_interval": "1m"
  },
  "template": "httpbeat-*"
}
and restarted everything.
I think the 'fields' part was what mattered most; when I used the template without it, I got an error in Kibana about 'fields', and:
"fields": {
  "properties": {}
}
was something that was present inside metricbeat.template-es2x.json but not in httpbeat.template-es2x.json. It seems to work with that field added to httpbeat.template-es2x.json, and not with httpbeat.template.json...
Grtz,
S.
PS: if you have an answer that is not based on trial and error, I'll accept that instead of this one...

ElasticBeanstalk MultiContainer docker with nginx

I have two applications that handle different, but related functionality. I would like to deploy them as a single entity on a single host:port.
My plan is to use Elastic Beanstalk's multi-container Docker platform. Each application would be a container.
How can I tie them together? Is it possible to install and configure nginx on the EB host?
You need to define all the containers that make up your application (together with an nginx container) in Dockerrun.aws.json.
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/conf.d"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "firstapp",
      "image": "FIRST_APP_IMAGE_NAME:FIRST_APP_TAG",
      "environment": [],
      "essential": true,
      "memoryReservation": 200,
      "mountPoints": [],
      "portMappings": [
        {
          "hostPort": 8081,
          "containerPort": 8080
        }
      ]
    },
    {
      "name": "secondapp",
      "image": "SECOND_APP_IMAGE_NAME:SECOND_APP_TAG",
      "environment": [],
      "essential": true,
      "memoryReservation": 200,
      "mountPoints": [],
      "portMappings": [
        {
          "hostPort": 8082,
          "containerPort": 8080
        }
      ]
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memoryReservation": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": [
        "firstapp",
        "secondapp"
      ],
      "mountPoints": [
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/conf.d",
          "readOnly": true
        }
      ]
    }
  ]
}
Since we linked the app containers to the nginx container, we can refer to them by their names as hostnames.
Then you need to deploy Dockerrun.aws.json zipped together with the nginx config file conf.d/default.conf (put it into a conf.d folder), in which you specify:
location /firstapp/ {
    proxy_pass http://firstapp;
}
location /secondapp/ {
    proxy_pass http://secondapp;
}
Please also refer to the AWS example of an nginx proxy in front of a PHP application:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
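For reference, the bundle Elastic Beanstalk expects is simply a zip with Dockerrun.aws.json at its root and the conf.d folder beside it. A minimal sketch of producing it with Python's zipfile module (the file contents here are placeholders for the real ones above):

```python
import os
import tempfile
import zipfile

# Work in a scratch directory; these files are placeholders for the real ones.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "conf.d"))
with open(os.path.join(root, "Dockerrun.aws.json"), "w") as f:
    f.write("{}")  # your real Dockerrun.aws.json goes here
with open(os.path.join(root, "conf.d", "default.conf"), "w") as f:
    f.write("# nginx location blocks go here\n")

bundle_path = os.path.join(root, "app-bundle.zip")
with zipfile.ZipFile(bundle_path, "w") as bundle:
    # Archive names must be relative, so Dockerrun.aws.json sits at the zip root.
    bundle.write(os.path.join(root, "Dockerrun.aws.json"), "Dockerrun.aws.json")
    bundle.write(os.path.join(root, "conf.d", "default.conf"), "conf.d/default.conf")

print(sorted(zipfile.ZipFile(bundle_path).namelist()))
# ['Dockerrun.aws.json', 'conf.d/default.conf']
```

Any zip tool produces an equivalent bundle; the only thing that matters is the layout inside the archive.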
