Why does Spring Cloud Contract stub runner have local and remote attributes? - spring-cloud-contract

The Spring Cloud Contract docs say
"Use the REMOTE stubsMode when downloading stubs from an online repository and LOCAL for Offline work".
Why does Spring Cloud Contract stub runner need local and remote attributes?
I would expect instead that it should respect the normal Maven lifecycle... If I do a mvn clean install on the contract module it should publish locally. If I do a mvn clean deploy there, it should publish to my remote. Same for the test verifier... If there is a copy of the binaries in my local repo use that. Otherwise pull it from remote
So I am not getting why we have to include local and remote in the stub runner.
This also seems dangerous because you might accidentally check in code with local when you meant to change it to remote on the build server

Why does Spring Cloud Contract stub runner need local and remote attributes?
We've described it in the docs that you quote. When you work offline, you want to automatically pick the stubs from your local .m2. Otherwise, you want to pick them from a different location (an online repository).
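For example, a consumer-side test typically sets the mode on the stub runner annotation; the coordinates, port and repository URL below are placeholders, not values from your setup:
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;

@SpringBootTest
@AutoConfigureStubRunner(
        ids = "com.example:producer-service:+:stubs:8090",   // placeholder coordinates
        stubsMode = StubRunnerProperties.StubsMode.LOCAL)    // offline: resolve stubs from ~/.m2
// For a CI build you would instead use:
//   stubsMode = StubRunnerProperties.StubsMode.REMOTE,
//   repositoryRoot = "https://nexus.example.com/repository/maven-releases/"
public class ProducerClientTest {
    // tests here call http://localhost:8090 and hit the producer's stubs
}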
I would expect instead that it should respect the normal Maven lifecycle... If I do a mvn clean install on the contract module it should publish locally. If I do a mvn clean deploy there, it should publish to my remote. Same for the test verifier... If there is a copy of the binaries in my local repo use that. Otherwise pull it from remote
You're mixing up Stub Runner with the verifier. When you're on the producer side, you're using the Spring Cloud Contract verifier, and it follows the Maven lifecycle fully: we produce a stub jar and attach it to the standard Maven flow. Stub Runner, on the other hand, is completely unrelated to your Maven flow.
This also seems dangerous because you might accidentally check in code with local when you meant to change it to remote on the build server
If you check in code with local then indeed you can have a false positive. That's why you should take care of what you're doing. When you're on the consumer side and doing ./mvnw clean install/deploy, Stub Runner just follows your test setup. If you've messed up the configuration in your test setup, Stub Runner can't do much about it.

Related

How to deploy dotnetcore/react -au Individual to Azure

If you install the dotnetcore3 SDK and create the dotnetcore/react project, it compiles and runs fine. Modifications to use external identity providers are straightforward and work as documented. You will need to add packages for the providers you wish to support, such as Microsoft.AspNetCore.Authentication.MicrosoftAccount.
At this point you might try dotnet publish but the resultant package produces the following (truncated) stack trace:
info: IdentityServer4.Startup[0]
Starting IdentityServer4 version 3.0.0.0
crit: Microsoft.AspNetCore.Hosting.Diagnostics[6]
Application startup exception
System.InvalidOperationException: Key type not specified.
at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.LoadKey()
at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.Configure(ApiAuthorizationOptions options)
Service Worker
The template is set up with a service worker. This is a jolly nuisance while debugging our configuration, so turn it off by commenting out registerServiceWorker(); in ClientApp/src/index.js. If you've already run the app, you will need to flush your cache to dislodge it.
Certificate
A certificate is required. The project template uses OIDC implemented with IdentityServer4, and therefore requires a PFX. On Windows you can create one of these using CertReq. It would be poor security practice to add it to the project, so I made the PFX file a sibling of the project folder. The registration in appSettings.json looks like this:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "../cert-name.pfx",
"Password": "cert-password"
}
},
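If CertReq feels fiddly, a self-signed PFX can also be generated from PowerShell; the DNS name, output path and password below are just examples, not values the template requires:
# create a self-signed certificate and export it as a PFX (placeholder names/paths)
$cert = New-SelfSignedCertificate -DnsName "myapp.example.com" -CertStoreLocation "Cert:\CurrentUser\My"
$pwd = ConvertTo-SecureString -String "cert-password" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath "..\cert-name.pfx" -Password $pwd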
Secrets
dotnet user-secrets is strictly a development-mode thing. We are expected to manually transcribe all the secrets to Azure environment variables and modify the program to include them in its configuration loading process:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddEnvironmentVariables();
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
The keys used by dotnet user-secrets are full of colons. When mapping them to environment variables, you'll need to escape these colons as double underscores for cross-platform compatibility.
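For example, the certificate password registered above would surface as an Azure application setting roughly like this (the value is the placeholder from earlier):
IdentityServer:Key:Password        (configuration key in appsettings / user-secrets)
IdentityServer__Key__Password      (equivalent environment variable name in Azure)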
Since dotnet user-secrets isn't exactly the most convenient tool ever, it occurs to me that it might be less bother to just use environment variables all the way through.
Core version madness
Silly me trying to use the LTS version (3.1).
When creating a Classic CI pipeline from the Azure portal, it is impossible to select dotnet core 3.1 because it's not in the list. The list does contain LTS and Latest, but both of those selections produce validation errors when you try to finalise the deployment. Choosing 3.0 allows finalisation and the deployment runs, but although it manages to publish to the Web App on Azure, the Web App is set to dotnet core 3.0, and since the project specifies 3.1 it won't start.
You can manually change this in the Web App Configuration blade in the Azure portal, but it just gets mangled on every deployment. Changing the project to use 3.0 and compatible packages seems to work.
Am I using the tools incorrectly, or is the Azure CICD set up really crap?
npm
And now it starts but can't find npm. Installing npm using SSH looks like this (it's already root, so sudo is not involved):
curl -sL https://deb.nodesource.com/setup_13.x | bash
apt-get install -y nodejs
and this seems to work, but it doesn't survive a restart of the Web App (presumably because it is installed outside /home).
Everything works without auth
If I deploy a project created with dotnet new react without the -au Individual qualifier, it works perfectly. The site loads, the web APIs are called, the data returns etc.
What's the difference? There are a few:
IdentityServer4
SQLite
Generation of the SQLite database
Rummaging in the .csproj I find this
<PackageReference Include="Microsoft.EntityFrameworkCore.Sqlite" Version="3.0.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="3.0.0">
and this is the first thing used in ConfigureServices
services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));
but this doesn't trigger the exception. That occurs later when IdentityServer is created. Further up the stack trace we find this:
Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore
.MigrationsEndPointMiddleware.Invoke(HttpContext context)
from which I conclude that EF uses Node to do migration, at least for SQLite.
I don't think it would help to add npm to package.json because that would just bundle it for delivery to the browser. It appears that npm is required at the server for the migration process.
But Node and npm are simply not part of the dotnet core Web App stack.
One suggestion (via Reddit) is to use a Node stack Web App and deploy a self-contained build of the dotnet core server code. This is my next port of call. In the spirit of solving one problem at a time I shall first learn to do self-contained build deployment with a minimal Core/React project (no auth).
This almost works. Using SSH I was able to run the app, and it started without throwing any errors, but it listened on port 5000 rather than 8080, which is where it needs to be if you want it surfaced on port 80 on the public interface.
On the Node stack, the startup script is unsurprisingly configured to launch a Node app, and it barfs before it gets to the startup command you supply. Because it's a Node startup script, it also doesn't set ASPNETCORE_URLS=http://*:$PORT, which is required to make the core project serve on port 8080.
Taking a step back, npm is a development thing. Why would anyone deliberately introduce it as a production dependency? That would be crazy, it would create mayhem.
The key word in that question is "deliberately". What if it weren't deliberate? How could you do it accidentally? Well, you could write a script to gather all your environment variables and plonk them into Azure, and this might capture ASPNETCORE_ENVIRONMENT=Development
Lo and behold, there it was. Deleting it restarts the app and HURRAH! no more demands for NPM. Looks like the stack isn't broken after all. Which makes me a happy camper since I didn't want to give up CICD.
This could also be defined in appsettings.json.
The important takeaway is that if you see demands for npm after deployment to Azure, your app is trying to run in development mode in a production environment.

Run integration test from IDE and cmd line fails with JHipster jwt secret empty

I'm running a microservice architecture and have a problem running some of the integration tests.
Running JHipster 5.0.2 on Mac against MySQL db.
LogsResourceIntTest is one example (generated by JHipster with no modifications).
The following code aborts with an NPE (NullPointerException):
this.secretKey = encoder.encodeToString(jHipsterProperties.getSecurity().getAuthentication().getJwt()
        .getSecret().getBytes(StandardCharsets.UTF_8));
I have debugged it and the properties for the timeouts are set, but the token (secret) is empty.
Token is set in my /src/test/resource/application-test.yml file.
Running the test from the cmd line also aborts with an NPE. I run the tests as follows:
./mvnw clean install -Dprofile=test
Any pointers on how to solve this problem?
There's no such "test" profile in JHipster, so it can't be "generated by JHipster with no modifications". Using a profile that does not exist, you get properties from default profile.
Properties for tests are read from src/test/resources/config/application.yml because of test classpath.
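For reference, the property that code reads would sit under a structure along these lines in src/test/resources/config/application.yml (the secret value here is just a placeholder):
jhipster:
  security:
    authentication:
      jwt:
        secret: my-long-random-test-secret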
Found the problem in the yml file.
I had updated the file with some of my application's properties and introduced a format error.
I have used a "test" profile on other Spring Boot apps to help identify which files must be included for configuration. I reverted everything back to the default and fixed the error in the yml.
I will look at another way of identifying specific config files. The unit tests need some of the Spring beans, but the integration tests need all (or most) of them.
I used Maven Surefire and Failsafe to split them before. I'm integrating with Google Pub/Sub and don't need all of that configured when running unit tests.
Thank you for the help.

Spring Cloud Contract generating stubs and a standalone server with the same stubs

This question feels a bit strange, but here it goes:
I know I can use Spring Cloud Contract (SCC) in unit tests because I can access the stubs it creates.
But the question is: from those same stubs, can I configure a standalone server that could run on some DEV server, let's say for some manual testing or for some Selenium testing of the frontend app that will ultimately use those stubs?
Have you read the docs? You can use the Stub Runner Boot application. You can read about it here https://cloud.spring.io/spring-cloud-static/Finchley.RELEASE/single/spring-cloud.html#_stub_runner_boot_application and about its Docker version here https://cloud.spring.io/spring-cloud-static/Finchley.RELEASE/single/spring-cloud.html#stubrunner-docker
UPDATE:
Updating links for Hoxton.SR1 release train (Spring Cloud Contract 2.2.1.RELEASE):
Stub Runner Boot: https://cloud.spring.io/spring-cloud-static/spring-cloud-contract/2.2.1.RELEASE/reference/html/project-features.html#features-stub-runner-boot
Stub Runner Docker: https://cloud.spring.io/spring-cloud-static/spring-cloud-contract/2.2.1.RELEASE/reference/html/docker-project.html
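As a rough sketch of the Docker variant, assuming the springcloud/spring-cloud-contract-stub-runner image and its STUBRUNNER_* environment variables described in those links (coordinates, ports and repository URL are placeholders to adapt):
docker run -d \
  -p 8083:8083 -p 9876:9876 \
  -e STUBRUNNER_IDS="com.example:producer-service:+:stubs:9876" \
  -e STUBRUNNER_STUBS_MODE=REMOTE \
  -e STUBRUNNER_REPOSITORY_ROOT="https://nexus.example.com/repository/maven-releases/" \
  springcloud/spring-cloud-contract-stub-runner:2.2.1.RELEASE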

Is it possible to test consumer side without stub runner in spring-cloud-contract

Currently I want to test, on the consumer side, the error handling around calls to other microservices via Spring Cloud Contract. But there are some troubles blocking me from creating stubs on the provider side, because it's difficult to share build artifacts in our Docker CI build.
I'm wondering if it's possible to just create Groovy or YAML contracts on the consumer side and then use them via a WireMock server?
There are ways to achieve it. One is to clone the producer's code, run ./mvnw clean install -DskipTests or ./gradlew publishToMavenLocal -x test, and have the stubs installed without running any tests. Another option is to write your own StubDownloaderBuilder (for Finchley) that will fetch the contracts via Aether, as the AetherStubDownloader does, but then will also automatically convert the contracts to WireMock stubs.
Of course both approaches are "cheating". You shouldn't use the stubs in your CI system until the producer has actually published the stubs.
Maybe instead of hacking the system it's better to analyze this part:
"it's difficult to share build artifacts in our Docker CI build"
and try to fix it? Why is it difficult? What exactly is the problem?

How to use Sonar+JaCoCo to measure line coverage using integration tests (manual+automated)

I am trying to do line coverage analysis of a Java-based application. I found many resources on the internet on how to use the Sonar+JaCoCo plugin to get line coverage results, and it looks very promising. However, I couldn't get full clarity on how to go about implementing this solution.
More about my project:
There is a service being called by a website. The service is Java based and is built using Maven.
There is also a Selenium-based test suite that is run against the website (which makes calls to the above-mentioned service in several places). The test suite is built and invoked by Ant.
The code base for the service and the code base for the tests are at different locations on the same host.
I need to generate coverage report for the service based on the integration test suite.
The resources I went through are:
http://www.sonarsource.org/measure-coverage-by-integration-tests-with-sonar-updated/
http://www.eclemma.org/jacoco/trunk/doc/ant.html
Even after going through all of these, I am not sure where to put jacoco-agent.jar, whether to make JaCoCo part of Maven (the service's build process) or Ant (the tests' build process), how to invoke the JaCoCo agent, or where to specify the source repository (the service's code base) and test repository locations.
I have tried blind permutations of all of the above, but either the maven build or the ant build starts failing as soon as I add jacoco tasks to them.
Can someone please help me out in this? I need to understand the exact steps to follow to get it done.
When you execute your server process for the test run, you need to ensure that the JaCoCo agent is attached to the JVM. The agent will then listen and record details of the code covered for the lifetime of the JVM.
You then execute your client-side Selenium tests, which will invoke the server. The JaCoCo agent will record details of the code executed as part of your tests. When the client tests finish, you need to shut down your server process, which should result in a JaCoCo coverage file being written.
The final step is to generate a JaCoCo HTML report from that coverage file. I might suggest you look into moving your Ant-based Selenium tests into your Maven POM, since it will then be easier to control the order of test execution.
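A rough sketch of that flow, with the agent path, output file and jar name as placeholders:
# start the service with the JaCoCo agent attached (paths are examples)
java -javaagent:/opt/jacoco/jacocoagent.jar=destfile=/tmp/jacoco-it.exec,append=false -jar service.jar
# run the Selenium suite, then stop the JVM so jacoco-it.exec is flushed;
# point the jacoco:report Ant task (see the eclemma Ant doc linked above) or Sonar at that .exec file for the report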
