Mocking with Serverless Offline and Integration Tests

I have a Serverless stack (AWS) using API Gateway authentication and a Lambda, implementing a RESTful API using NestJS.
I'm using serverless-offline to simulate the stack in my local environment. This allows me to simulate the API Gateway authentication (simple keys, not a custom authorizer) and Lambda execution from an end-to-end API call perspective.
I can use the NestJS test helpers to perform e2e tests, which allows me to inject mocks for other services not available in the stack.
What I'd like to do is use serverless-offline to run the tests - hence allowing me to test authentication via its simulated API Gateway. I can see how I can do this by launching serverless-offline in my tests (e.g. https://dev.to/didil/serverless-testing-strategies-4g92).
BUT, if I use serverless-offline (as in the linked article), then I can't see how it would be possible to inject mocks for other services not available in the stack.
Is there another solution for e2e testing that allows me to simulate the api gateway AND inject mocks?
Any help much appreciated!

With the testing strategy you linked, you can mock your requests and responses to external services to test different scenarios. Nock is a library that can simplify mocking external requests in your tests.
However, it seems that this may not work as imagined with serverless-offline. I found this answer that outlines a strategy for replacing endpoints that access external services when running tests.
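For illustration, a minimal Jest sketch of stubbing an external call with nock (the URL and payload here are made up). Keep in mind that nock only patches Node's http module in the current process, so it won't intercept requests made by handlers that serverless-offline runs in a separate worker process:

import axios from 'axios';
import nock from 'nock';

describe('external service client', () => {
  // Remove any leftover interceptors so tests stay isolated.
  afterEach(() => nock.cleanAll());

  it('returns the stubbed payload', async () => {
    // Intercept the outgoing request before the code under test runs.
    nock('https://external.example.com')
      .get('/users/1')
      .reply(200, { id: 1, name: 'Mary' });

    const res = await axios.get('https://external.example.com/users/1');
    expect(res.data).toEqual({ id: 1, name: 'Mary' });
  });
});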

Related

Cypress with NextJS SSR - intercept RESTful API using Axios

I'm trying to write some tests with Cypress and fixtures on my SSR Next.js app, which connects to a RESTful API using Axios. But I'm having trouble intercepting the RESTful API calls with cy.intercept(), because Cypress cannot track requests that are sent during SSR, and cy.intercept() only works on requests that Cypress can track. Is there any way to change the responses coming from the RESTful API? Any packages would also help.
cy.intercept relies on the in-browser API to capture requests. The requests you make in your SSR hooks in Next.js (such as getServerSideProps) do not happen in the browser, so cy.intercept doesn't know anything about them.
I'm biased when it comes to API mocking solutions, but I still encourage you to look into MSW once again. See the official Next.js example, which supports both browser-side and server-side request interception using the same request handlers. The README also goes into detail about the key steps necessary for both interceptions to work.
This kind of interception embeds MSW into the Next.js app. This means that you won't be able to have runtime request handlers without either restarting the runtime or exposing the worker/server instance to the testing context. This may not be an issue for you per se, so you may disregard this mention until you know it's relevant to your testing setup.
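To make the shared-handler idea concrete, here is a minimal sketch using the msw v1 API (the endpoint and payload are invented). The same handlers module feeds both a browser worker and a Node server, which is what lets the mocks apply to getServerSideProps as well:

// mocks/handlers.ts - one set of handlers shared by both environments
import { rest } from 'msw';

export const handlers = [
  rest.get('https://api.example.com/user', (req, res, ctx) =>
    res(ctx.status(200), ctx.json({ firstName: 'Jane' }))
  ),
];

// mocks/browser.ts - intercepts requests made in the browser
import { setupWorker } from 'msw';
import { handlers } from './handlers';
export const worker = setupWorker(...handlers);

// mocks/server.ts - intercepts requests made during SSR in Node.js
import { setupServer } from 'msw/node';
import { handlers } from './handlers';
export const server = setupServer(...handlers);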

Azure ASP.NET REST API and Database deployment

We started our planning phase on a new project and settled on an ASP.NET REST API, which should be hosted on Azure. Since none of us has any experience with deployment on Azure (or any other cloud service), I have two questions.
Do you need separate Azure Services for the Database and the API, or might there be a combined "package" for the prototype, which later can be changed easily?
Is there any documentation or are there any examples of the entire deployment process of a simple dummy API and the DB? I have spent the last few hours reading the official documentation and searching around, but I would really love to see some sort of reference, just to ensure I don't miss something.
For now, the best I have found is this and this. This seems rather shallow, so I really hope that there is more.
If you're looking for in-depth design and implementation details, I would suggest the Azure Architecture Center as an excellent place to start; for hands-on experience, there are hundreds of free courses available on Microsoft Learn.
Specifically, there are sections on API design and API implementation. From the Serverless web application page:
If you don't need all of the functionality provided by API Management, another option is to use Functions Proxies. This feature of Azure Functions lets you define a single API surface for multiple function apps, by creating routes to back-end functions. Function proxies can also perform limited transformations on the HTTP request and response. However, they don't provide the same rich policy-based capabilities of API Management.
Function Proxies
I would suggest starting with Azure Functions for your API: you only pay per call plus a combination of CPU, memory, and runtime, and the first 1,000,000 calls per month are free on the Consumption plan, rather than paying for an Azure App Service that hosts your API and runs all the time but is only utilized some of the time.
Some links that might help:
Build Serverless APIs with Azure Functions
Customize an HTTP endpoint in Azure Functions
There is an excellent summary in this article that states:
For heavy workloads:
Private (enterprise) API - API Management with a Premium plan.
Public API - Functions Proxy with the Premium plan.
For light/moderate workloads:
Private API - Functions Proxy with the Premium plan.
Public API - Functions Proxy with a Consumption plan and a custom warm-up solution.
Then from here you can use a connection string to an Azure SQL DB inside your functions to write to the DB, or use something like Azure Managed Identity (yes, the link is for Azure PostgreSQL, but the process is much the same for Azure SQL).
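The question is ASP.NET-centric, but to sketch the connection-string idea end to end in a compact way, here is a hypothetical HTTP-triggered function using the Azure Functions Node.js v4 programming model with the mssql package (the table name and app setting are assumptions; the C# equivalent with ADO.NET or EF Core follows the same shape):

import { app, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';
import sql from 'mssql';

app.http('items', {
  methods: ['GET'],
  authLevel: 'function',
  handler: async (req: HttpRequest, ctx: InvocationContext): Promise<HttpResponseInit> => {
    // The connection string lives in an app setting, never in source control.
    const pool = await sql.connect(process.env.SQL_CONNECTION_STRING!);
    const result = await pool.request().query('SELECT TOP (10) Id, Name FROM dbo.Items');
    return { status: 200, jsonBody: result.recordset };
  },
});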
In terms of deployment you should be looking at using Azure DevOps (or GitHub Actions):
Setting up a CI/CD pipeline for Azure functions (old way - GUI pipelines)
Deploy an Azure Functions app to Azure (new way - YAML pipelines)
Continuous Delivery for Azure SQL DB using Azure DevOps
Another helpful tool to get a gauge of costs is the Azure Pricing Calculator.

Pact http vs https endpoint testing

Trying to write my first Pact tests, and I am not able to find answers to some basic questions. Do consumer and provider tests run against mock servers only, or should we build our application locally (or in a specific environment during CI/CD) and then run the tests against the actual running application? Also, is it possible for me to run consumer tests against a mock server and run provider tests against an actual https endpoint?
This page answers all of those questions: https://docs.pact.io/getting_started/how_pact_works
Do consumer and provider tests run against mock servers only, or should we build our application locally (or in a specific environment during CI/CD) and then run the tests against the actual running application?
Consumer test: runs against a mock provider.
Provider test: Pact simulates the consumer against your provider (see the "how Pact works" page linked above).
Is it possible for me to run consumer tests against a mock server
The consumer test must run against a Pact mock service, because that's how we record what you did and check that your requests match what's documented.
and run provider tests against an actual https endpoint?
You can take this sort of "black box" approach, but it's better to run against a locally running provider that you can control.
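To make the consumer side concrete, here is a minimal pact-js sketch (the consumer, provider, and endpoint names are made up). The test runs against the Pact mock service on a local port; verify() checks that the expected interactions happened, and finalize() writes the pact file used later for provider verification:

import path from 'path';
import axios from 'axios';
import { Pact } from '@pact-foundation/pact';

const provider = new Pact({
  consumer: 'MyConsumer',
  provider: 'MyProvider',
  port: 8989,
  dir: path.resolve(process.cwd(), 'pacts'), // where the pact file is written
});

describe('GET /users/1', () => {
  beforeAll(() => provider.setup());
  afterAll(() => provider.finalize());

  it('returns the user', async () => {
    await provider.addInteraction({
      state: 'user 1 exists',
      uponReceiving: 'a request for user 1',
      withRequest: { method: 'GET', path: '/users/1' },
      willRespondWith: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: { id: 1, name: 'Mary' },
      },
    });

    // Point the consumer code at the mock service instead of the real provider.
    const res = await axios.get('http://localhost:8989/users/1');
    expect(res.data).toEqual({ id: 1, name: 'Mary' });

    await provider.verify();
  });
});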

.Net Core querying records from different microservices

I'm learning how to design and implement microservices using serverless technologies. I'm trying to create autonomous microservices and am having difficulty understanding how to communicate data between microservices. I'm using .NET Core for my microservices and want each microservice to be an AWS Lambda function exposed via API Gateway.
I have the following scenario:
Institution microservice - returns a list of institutions within a radius (25 miles) of a zip code.
ROI Calculator microservice - receives a zip code as input and calls the Institution microservice, receiving a list of institutions. For each institution returned, it performs a series of calculations yielding an ROI value.
How should ROI Calculator microservice make a call to institution microservice?
An ASP.NET Core Web API application can be published as-is to AWS Lambda as a serverless function. You get everything a regular .NET Core application provides, like controllers, models, etc. The Amazon API Gateway proxy is integrated directly into the .NET Core API routing system, so your AWS Lambda function will serve your .NET Core Web API. You should watch these tutorials for starters to get a better understanding:
Create .NET Core AWS lambda function
.NET core AWS Lambda Microservices
If you go by the template provided by the AWS SDK (the ASP.NET Core Web API template) and publish a .NET Core Web API on AWS, it will configure everything for you, including the AWS Lambda function and API Gateway. So if you create 2 .NET Core Web API projects, you will have 2 API gateways. The problem is that 10 microservices would mean 10 API gateways, so ideally we should have 1 API gateway in front of multiple microservices.
I recently worked on a POC that has one API gateway with all the microservices' AWS Lambda functions behind it. Each microservice has a base path, e.g. shopping or users, set up in its Startup.cs that identifies it individually behind the API gateway. So microservice 1 will be apigateway/shopping/{anything}, another microservice will be apigateway/users/{anything}, and both are configured behind the same API gateway. API Gateway sends the request on to the AWS Lambda function (a .NET Core Web API), where it is resolved by the .NET Core routing system. Multiple controllers can even be used this way within a single Web API project without problems.
I have modified serverless.template so that we publish only the AWS Lambda function and configure API Gateway separately. You can find a code sample and details on my GitHub blog here: .NET Core Web API AWS Lambda Microservices Sample.
There are two ways of doing this; which is best probably depends on how independent your microservices need to be:
Make an internal HTTP call from the ROI service to the Institution service, which would be okay (see the sketch after this answer). The problem with this is that if the Institution service is down, the data will not be available.
Store the data needed to make the calculation inside the ROI service as well. This seems strange, but once the data is created in, say, the Institution service, it can be sent via a message bus to the ROI service, which then uses it when needed. (This may not suit your domain, though; it depends what information is needed.)
However, it seems that the calculation and the storage of the institutions could live within the same microservice, thereby eliminating the problem.
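Language aside (the question is .NET, but the pattern is identical), here is a TypeScript sketch of option 1; the URL and the ROI formula are hypothetical, and the timeout makes a down Institution service fail fast instead of hanging the caller:

import axios from 'axios';

interface Institution { id: number; name: string; }

// Option 1: a synchronous HTTP call from the ROI service to the
// Institution service; if that service is down, this request fails.
async function institutionsNear(zip: string): Promise<Institution[]> {
  const res = await axios.get('https://institutions.example.com/institutions', {
    params: { zip },
    timeout: 3000, // fail fast on an unhealthy dependency
  });
  return res.data;
}

function calculateRoi(institution: Institution): number {
  return institution.id * 0.05; // hypothetical stand-in for the real calculations
}

export async function roiForZip(zip: string): Promise<number[]> {
  const institutions = await institutionsNear(zip);
  return institutions.map(calculateRoi);
}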

Cloudfoundry actuator endpoints in a .net core console application hosted by GenericHost

I have a question about CloudFoundry actuator endpoints in a .NET Core console application using Steeltoe. I am planning to use the generic host (https://jmezach.github.io/2017/10/29/having-fun-with-the-.net-core-generic-host/) to perform some background tasks. I would like to use a few actuator endpoints, e.g. the health actuator. I could find samples using WebHost here: https://github.com/SteeltoeOSS/Samples/blob/dev/Management/src/AspDotNetCore/CloudFoundry/Startup.cs. The code below needs an IApplicationBuilder:
// Add management endpoints into pipeline
app.UseCloudFoundryActuators();
So, is it possible to use actuator endpoints in a console application hosted by the generic host? Any samples are most welcome. Thanks in advance.
Steeltoe 3.0 added a variety of extensions for using Management/Actuators with HostBuilder. Check out this file for some examples.
It would also be possible to add another implementation of the actuator endpoints that communicates over something other than HTTP (perhaps RabbitMQ, as an example); Steeltoe Management was built with portable base endpoint functionality in mind. But then you would also need a new application for interacting with them, and you wouldn't be able to use them with Apps Manager on PCF.
