Lua OpenResty testing - nginx

How can I mock the ngx object and test my Lua files?
For example, let's say I have
-- file.lua
function computeUpstream()
  -- advanced calculations!
  return theCalculatedUpstream
end

ngx.var.upstream = computeUpstream()
And I want to test this file. How do I do that?

IMO the best solution is to use the official OpenResty Docker images, run your configuration inside a container, and test it with a series of HTTP requests.
Using Docker (and maybe docker-compose) you can simulate the whole infrastructure of the application under test: mock backends, a database with initial content, etc.
After some trial and error you will find a setup that gives good enough code coverage.
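For example, once the stack is up, the tests can be plain black-box HTTP assertions against the container. Here is a minimal sketch in Python with pytest and requests, assuming (these are placeholders to adapt) that the OpenResty container publishes port 8080 on localhost and that the location under test echoes the chosen upstream in an X-Upstream response header so the test can observe it:

# test_routing.py -- black-box HTTP tests against the Dockerized OpenResty.
# Assumptions to adapt: the container publishes port 8080 on localhost, and
# the location under test exposes the computed upstream in an X-Upstream
# response header purely so it can be asserted on.
import requests

BASE_URL = "http://localhost:8080"

def test_endpoint_is_up():
    resp = requests.get(BASE_URL + "/", timeout=5)
    assert resp.status_code == 200

def test_upstream_is_computed():
    # computeUpstream() runs inside nginx; we only assert on its visible effect.
    resp = requests.get(BASE_URL + "/some/path", timeout=5)
    assert "X-Upstream" in resp.headers

The point is that the Lua code is exercised exactly as it runs in production, inside nginx, and you only assert on observable HTTP behaviour.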

Can I set up a mock server by cloning real server responses?

TL;DR
Is there a tool that can record all the network activity as I visit a website and create a mock server that responds to those requests with the same responses?
I'm investigating ways of mocking the complex backend for our React application. We're currently developing against the real backend (plus test/staging environments). I've looked around a bit and it looks like there are a number of tools for mocking individual endpoints/features and sending the rest through to the real API (Mirage is leading the pack at the moment).
However, the Platonic ideal would be to mock the entire server so that a front end dev can work without an internet connection (again: Platonic ideal). It's a crazy lofty goal, I know this. And of course it would require mocking not only our backend but also requests to any 3rd-party data sources. And of course the data would be thin and dumb and stale. But this is just for ultra-speedy front end development, it's just mocking. The data doesn't need to be rich, it'll be up to us to make it as useful/realistic as we need it to be.
Probably the quickest way would be to recreate the responses the backend is already sending, and then modify them as needed for new features or features under test etc.
To do this, we might go into Chrome DevTools and recreate everything on the network tab: mock every request that was made by hardcoding the response that was returned. Taking it from there, do smart things like use URL pattern matching to return a simple placeholder image for any request for a user's avatar.
What I want to know is: is there any tool out there that does this automatically? That can watch as I load the site, click a bunch of stuff, take a bunch of actions, and spit out or set up a mock that recreates all the responses? And then we could edit any of them as we saw fit to simplify.
Does something like this exist? Maybe it's a browser tool. Maybe it's webpack middleware. Maybe it's a magic rooster.
PS. I imagine this may not be a specific, actionable enough question for SO. I'll understand if it's closed, but I'd really appreciate being directed somewhere where such questions/discussions would fit? I'm new enough to this world that SO is all I know!
There is a practice called service virtualization - a subset of the test double family.
Wikipedia has a list of tools you can use to do that. Here are a couple of examples from that list:
Open-source WireMock will let you record the mocks and edit the responses programmatically
Commercial Traffic Parrot will let you record the mocks and edit the responses via a UI and/or programmatically
https://mswjs.io/ can mock all the requests for you. It intercepts all your client's requests and returns your defined mock data.
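To make the record-and-replay idea concrete, here is a toy sketch in Python of what such tools do at their core. It handles only GET requests and 2xx responses, and the port, UPSTREAM URL, and cassette file name are placeholders; real tools like WireMock or Traffic Parrot handle methods, headers, and request matching far more thoroughly:

# record_replay_proxy.py -- toy record-and-replay proxy (GET + 2xx only).
# Point the frontend at http://127.0.0.1:8080 instead of the real backend.
# UPSTREAM, the port, and the cassette file name are all placeholders.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = os.environ.get("UPSTREAM", "https://api.example.com")  # hypothetical
CASSETTE = "recorded_responses.json"
MODE = os.environ.get("MODE", "record")  # "record" or "replay"

recordings = {}
if os.path.exists(CASSETTE):
    with open(CASSETTE) as f:
        recordings = json.load(f)

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        if MODE == "replay" and self.path in recordings:
            entry = recordings[self.path]
        else:
            # Record mode: forward to the real backend and save the answer.
            with urlopen(UPSTREAM + self.path) as resp:
                entry = {"status": resp.status,
                         "type": resp.headers.get("Content-Type", "text/plain"),
                         "body": resp.read().decode("utf-8")}
            recordings[self.path] = entry
            with open(CASSETTE, "w") as f:
                json.dump(recordings, f, indent=2)
        self.send_response(entry["status"])
        self.send_header("Content-Type", entry["type"])
        self.end_headers()
        self.wfile.write(entry["body"].encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Proxy).serve_forever()

Run it once in record mode while clicking through the app, then flip MODE to replay and work offline against the saved responses, editing the JSON file as needed.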

How to retry R testthat test on API error?

Some tests rely on external services (e.g. APIs). Sometimes these external services go down. This can cause tests to fail and (worse) continuous integration builds to fail.
Is there a way to instruct testthat tests and regular package examples to re-run more than once, ideally with the second attempt 5 minutes after the first?
Ideally you would write your tests so that they don't call the API or a database at all.
Instead, you mock the API endpoints according to the specification and also write tests for cases where the API returns unexpected results or errors.
Here is an example of a package that allows you to do so:
https://github.com/nealrichardson/httptest
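To show the shape of such tests, here is a minimal sketch written in Python with unittest.mock purely for illustration (in R, httptest's mock-API helpers play the analogous role); fetch_user and the URL are made up for the example:

# test_client.py -- illustrative only: mock the API call and test both the
# happy path and the error path. fetch_user and the URL are hypothetical.
import unittest
from unittest import mock

import requests  # assumed HTTP client used by the code under test

def fetch_user(user_id):
    # Example function under test (hypothetical).
    resp = requests.get(f"https://api.example.com/users/{user_id}")
    if resp.status_code != 200:
        raise RuntimeError(f"API error: {resp.status_code}")
    return resp.json()

class FetchUserTests(unittest.TestCase):
    @mock.patch("requests.get")
    def test_happy_path(self, mock_get):
        mock_get.return_value = mock.Mock(status_code=200, json=lambda: {"id": 1})
        self.assertEqual(fetch_user(1)["id"], 1)

    @mock.patch("requests.get")
    def test_api_error_is_surfaced(self, mock_get):
        mock_get.return_value = mock.Mock(status_code=503, json=lambda: {})
        with self.assertRaises(RuntimeError):
            fetch_user(1)

if __name__ == "__main__":
    unittest.main()

Both the happy path and the error path run instantly and never touch the network, so a flaky or offline API cannot break CI.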
If you are worried that your vendor might change the API, talk to them and get details on their API change management.
Ask them this:
What is your change management process?
How do you avoid introducing breaking changes to existing endpoints that people are using?
(modified from this post)
If you have to check that the API is still the same, draw a line between API validation and testing your own code.
You will need two separate processes:
Unit/acceptance tests that are executed against mocks of the API endpoints. These run fast and are focused on the logic of your application.
A pipeline for regular validation of the API. If your code is already live, you are likely to find out about any breaking changes in the API anyway, so this is largely redundant. In exceptional cases it can be useful, but only with a very bad vendor.

Collect NGINX access.log statistics to Prometheus

There is an nginx web server that serves API calls from different User-Agents. I want to parse the nginx logs and collect statistics about API calls per User-Agent.
I'm going to write a Python script to parse the nginx access.log, something like this: https://gist.github.com/sysdig-blog/22ef4c07714b1a34fe20dac11a80c4e2#file-prometheus-metrics-python-py
Is there a more suitable solution?
I highly discourage this approach.
Parsing logs is an old task, and there are many tools out there that are more than capable of doing this in an efficient way.
Personally, I have had success with Fluentd (Open Source Data Collector), but there are other tools, depending on your specific needs.
The community around a tool, i.e. the number and quality of plugins/add-ons, is relevant when choosing one.
So if googling fluentd prometheus gets you results from GitHub and from the developers themselves, that might be the right course of action.
When an application doesn't expose whitebox monitoring endpoints, parsing the logs is the only solution.
From there, you have multiple choices depending on the scale and the budget of your setup:
centralizing logs (in Elasticsearch, for example) using a sidecar like Filebeat to parse and ship them; you can then run queries to export statistics
log parsers that expose statistics: fluentd, telegraf and mtail are good examples
regularly running a script that dumps the data into a .prom file to be collected by the node exporter is also a cheap solution (see the sketch below)
Rolling your own script should be a last resort, for statistics you cannot get from off-the-shelf tools or that need context to be extracted. It comes at the cost of handling painful scenarios; in your case, following the file when it rotates can be an issue.
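As a concrete illustration of the .prom-file option, here is a minimal sketch in Python that counts requests per User-Agent and writes a file for the node exporter's textfile collector. The paths and the metric name are placeholders, and it assumes the default combined log format, where the User-Agent is the last double-quoted field on each line:

# useragent_stats.py -- dump per-User-Agent request counts for the
# node_exporter textfile collector. Paths and metric name are placeholders.
# Assumes the "combined" log format (User-Agent is the last quoted field).
import re
from collections import Counter

ACCESS_LOG = "/var/log/nginx/access.log"
PROM_FILE = "/var/lib/node_exporter/textfile_collector/nginx_user_agents.prom"

counts = Counter()
with open(ACCESS_LOG, errors="replace") as log:
    for line in log:
        quoted = re.findall(r'"([^"]*)"', line)
        if quoted:
            counts[quoted[-1]] += 1

with open(PROM_FILE, "w") as out:
    out.write("# HELP nginx_requests_by_user_agent_total Requests per User-Agent.\n")
    out.write("# TYPE nginx_requests_by_user_agent_total counter\n")
    for agent, n in counts.most_common():
        # Escape backslashes and quotes for the Prometheus exposition format.
        label = agent.replace("\\", "\\\\").replace('"', '\\"')
        out.write(f'nginx_requests_by_user_agent_total{{user_agent="{label}"}} {n}\n')

Run it from cron; note that it re-reads the whole file on every run, so log rotation resets the counts, which is exactly the kind of painful scenario mentioned above.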

How to test different versions of backend services on Kubernetes?

I have a frontend instance (an Angular app on nginx) which proxies calls to the backend under a specific domain (let's say backend-app). Everything's easy when there is only one instance of both backend and frontend: I name the Service backend-app and DNS resolves it to the correct backend Deployment.
Let's say I have another version of the backend which I would like to test before merging to master. As the nginx configuration of the frontend instance is hardcoded to proxy to backend-app, creating another Service under the same name for the newer version of the backend doesn't work.
I considered these options:
Making an environment variable and substituting the domain name in the nginx proxy configuration at runtime. This way I could be flexible about where I route frontend calls. The con of this solution, as far as I have investigated, is that it defeats the purpose of self-containment: it becomes ambiguous which backend the frontend talks to, and this kind of configuration is prone to errors.
Creating a different namespace every time I want to test something. While this allows spinning up the whole stack without any problem or conflict, it seems a huge overhead to create namespaces just to test something once.
Having some fancy configuration combining labels and selectors. I couldn't work out or find how to do it.
Any other opinions/suggestions you might have?
Try this approach
add the label name: backend-1 to the backend1 pods
add the label name: backend-2 to the backend2 pods
create a Service using the name: backend-1 selector
To test against the other backend, say backend2, all you have to do is edit the Service YAML and update the selector. You can toggle this way between backend1 and backend2 (see the sketch below).
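If you would rather script the toggle than edit the YAML by hand, here is a sketch using the official Kubernetes Python client; the Service name backend-app, the default namespace, and the name label values are assumptions taken from the question:

# switch_backend.py -- point the existing Service at the other backend by
# patching its selector. Assumes a Service called backend-app in the
# "default" namespace and pods carrying a name: backend-1 / backend-2 label.
import sys
from kubernetes import client, config

def switch(target_label: str) -> None:
    config.load_kube_config()  # or config.load_incluster_config() in-cluster
    v1 = client.CoreV1Api()
    # Strategic merge patch: only the selector's "name" key is changed.
    patch = {"spec": {"selector": {"name": target_label}}}
    v1.patch_namespaced_service(name="backend-app", namespace="default", body=patch)

if __name__ == "__main__":
    # e.g. python switch_backend.py backend-2
    switch(sys.argv[1])

kubectl patch does the same thing from the command line if you prefer not to add a script.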
Are you using OpenShift? If yes, then you can split traffic between services by percentage using a Route.
Check the blue/green and canary deployment options for more details.

Strategy for automated testing of a third-party service error

I'm trying to write an automated test for my app's response for a third party service being down.
Generally the service is always up. I'm looking for a reliable way to simulate it being down, notably without requiring root access. To add another wrinkle: the application under test runs in a separate process. I thought of just altering the configuration that points at the service, but that's not going to work.
This is all happening in a Unix environment (Linux, OS X), so I'd like it to work there; I don't care about Windows. Is there a quick way to block an outgoing port or something like that? It also has to be temporary, as this happens in the middle of a larger test suite.
Hopefully there is a fairly standard way of doing this that I just haven't found yet.
Clarification: this is a functional test to make sure the GUI responds correctly when the service is down. The unit tests are already covered.
Make a proxy for the service. Point at the proxy for the service. Shoot down the proxy for the service.
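In that spirit, here is a minimal killable TCP proxy sketched in Python; the hosts and ports are placeholders, and no root is needed because it binds an unprivileged local port. The test suite configures the application to talk to 127.0.0.1:9000 instead of the real service and calls stop() right before the "service down" assertions:

# flaky_proxy.py -- tiny TCP proxy a test suite can start and stop to
# simulate the third-party service going down. Hosts/ports are placeholders.
import socket
import threading

class TcpProxy:
    def __init__(self, listen_port, upstream_host, upstream_port):
        self.listen_port = listen_port
        self.upstream = (upstream_host, upstream_port)
        self._server = None

    def start(self):
        self._server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._server.bind(("127.0.0.1", self.listen_port))
        self._server.listen(5)
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        while True:
            try:
                client_sock, _ = self._server.accept()
            except OSError:  # listening socket closed by stop()
                return
            upstream_sock = socket.create_connection(self.upstream)
            for src, dst in ((client_sock, upstream_sock), (upstream_sock, client_sock)):
                threading.Thread(target=self._pipe, args=(src, dst), daemon=True).start()

    @staticmethod
    def _pipe(src, dst):
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        except OSError:
            pass

    def stop(self):
        # Simulate the outage: new connections to the local port are refused.
        self._server.close()

# Pseudo-usage in the suite: point the app at 127.0.0.1:9000, then
#   proxy = TcpProxy(9000, "real-service.example.com", 80)
#   proxy.start() ... proxy.stop() ... assert the GUI shows the error state

Because the proxy lives in the test harness, it can be brought back up for the rest of the suite simply by calling start() again.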
If your application has been designed with unit-testing in mind, you should be able to replace the 3rd party service with a different implementation.
The idea is that you can replace the service with a mock object which returns dummy data for testing. You could also replace the service with an object that throws an exception or times out.
With the mock service in place, the tester can use the application and see how it responds to service failures.
