Spring Cloud Contract: unable to read the stub jar from JFrog on the consumer side

I am new to Spring Cloud Contract testing. I am able to publish the stub jar to JFrog Artifactory from the producer side using the following Gradle code:
publishing {
    publications {
        maven(MavenPublication) {
            artifact("build/libs/provider-service-${version}-stubs.jar") {
                extension 'jar'
            }
        }
    }
    repositories {
        maven {
            name 'libs-snapshot'
            url "http://localhost:8081/artifactory/libs-snapshot/"
            credentials {
                username project.repoUser
                password project.repoPassword
            }
        }
    }
}
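For reference, with maven-publish configured this way the stub jar is uploaded by the aggregate publish task (a usage sketch; it assumes repoUser and repoPassword are passed as project properties rather than stored in gradle.properties):
./gradlew publish -PrepoUser=<user> -PrepoPassword=<password>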
But on the consumer side I am not able to read the jar; stub resolution fails with the error below.
Consumer code:
@AutoConfigureStubRunner(ids = "com.test:provider-service:+:stubs:8082",
        consumerName = "contracts",
        properties = {"stubrunner.username=admin", "stubrunner.password=Cirrus123$"},
        stubsPerConsumer = true,
        stubsMode = StubRunnerProperties.StubsMode.REMOTE,
        repositoryRoot = "http://localhost:8081/artifactory/libs-snapshot/")
Error:
java.lang.IllegalArgumentException: For groupId [com.test] artifactId [provider-service] and classifier [stubs] the version was not resolved! The following exceptions took place [org.eclipse.aether.transfer.MetadataNotFoundException: Could not find metadata com.test:provider-service/maven-metadata.xml in local (C:\Users\test\AppData\Local\Temp\aether-local7525112400154924089), org.eclipse.aether.transfer.MetadataTransferException: Could not transfer metadata com.test:provider-service/maven-metadata.xml from/to remote0 (http://localhost:8081/artifactory/libs-snapshot/): status code: 401, reason phrase: Unauthorized (401)]
But I am using the correct credentials to connect to JFrog.
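One way to narrow this down (a debugging sketch, assuming the standard Maven repository layout for the coordinates above) is to fetch the metadata file named in the exception directly with the same credentials:
curl -u admin:'Cirrus123$' "http://localhost:8081/artifactory/libs-snapshot/com/test/provider-service/maven-metadata.xml"
A 401 here points at Artifactory permissions (for example, the user's read permission on libs-snapshot) rather than at the Stub Runner configuration; if the metadata comes back, the credentials are not reaching Aether and the stubrunner.username/stubrunner.password properties are worth double-checking.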

Related

How to share proto files between client and server

I have set up a gRPC service that contains two proto files (one with custom types and one for the service).
customtypes.proto
syntax = "proto3";
package CustomTypes;
option csharp_namespace = "CustomTypes";

// Example: 12345.6789 -> { units = 12345, nanos = 678900000 }
message Decimal {
  // Whole units part of the amount
  int64 units = 1;
  // Nano units of the amount (10^-9)
  // Must be same sign as units
  sfixed32 nanos = 2;
}
greet.proto
syntax = "proto3";
option csharp_namespace = "Test";
import "Protos/customtypes.proto";
package greet;

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// The request message containing the user's name.
message HelloRequest {
  CustomTypes.Decimal value = 1;
}

// The response message containing the greetings.
message HelloReply {
  CustomTypes.Decimal value = 1;
}
The server-side project compiles fine, but when I add both proto files to the client project with Add Connected Service (in Visual Studio), it generates the following in my .csproj file:
<ItemGroup>
  <Protobuf Include="..\Server\Protos\customtypes.proto" GrpcServices="Client">
    <Link>Protos\customtypes.proto</Link>
  </Protobuf>
  <Protobuf Include="..\Server\Protos\greet.proto" GrpcServices="Client">
    <Link>Protos\greet.proto</Link>
  </Protobuf>
</ItemGroup>
But when I try to compile, it fails to find the Decimal custom type.
1>------ Build started: Project: Client, Configuration: Debug Any CPU ------
1>Protos/customtypes.proto : error : File not found.
1>../Server/Protos/greet.proto(4,1): error : Import "Protos/customtypes.proto" was not found or had errors.
1>../Server/Protos/greet.proto(16,3): error : "CustomTypes.Decimal" is not defined.
1>../Server/Protos/greet.proto(21,3): error : "CustomTypes.Decimal" is not defined.
1>Done building project "Client.csproj" -- FAILED.
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
I have set up a GitHub project that contains everything that is needed: https://github.com/AnderssonPeter/TestGRPC/tree/02779b47fda3128483698c628b4db05bc6b57c75
So how should I set up a project to share the proto files? The only solution I can think of is to create a common project and generate both client and server types, but it seems a bit wasteful to have the server types in the client project!
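One thing worth trying (a sketch against the folder layout shown above, not verified against the linked repo): Grpc.Tools resolves proto imports relative to the ProtoRoot metadata, and the generated ItemGroup never sets it, so "Protos/customtypes.proto" is looked up relative to the client project and fails. Pointing ProtoRoot at the server folder keeps a single copy of the files:
<ItemGroup>
  <!-- ProtoRoot tells protoc where to resolve imports such as "Protos/customtypes.proto" -->
  <Protobuf Include="..\Server\Protos\customtypes.proto" ProtoRoot="..\Server" GrpcServices="Client">
    <Link>Protos\customtypes.proto</Link>
  </Protobuf>
  <Protobuf Include="..\Server\Protos\greet.proto" ProtoRoot="..\Server" GrpcServices="Client">
    <Link>Protos\greet.proto</Link>
  </Protobuf>
</ItemGroup>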

Quarkus - Reactive file download

Using Quarkus, can somebody give an example of what the server- and client-side code looks like when using a reactive API to download a file over HTTP?
So far I have tried to return a Flux of NIO ByteBuffers, but it seems not to be supported:
@RegisterRestClient(baseUri = "http://some-page.com")
interface SomeService {
    // same interface for client and server
    @GET
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    @Path("/somePath")
    fun downloadFile(): reactor.core.publisher.Flux<java.nio.ByteBuffer>
}
Trying to return a Flux on the server side results in the following exception:
ERROR: RESTEASY002005: Failed executing GET /somePath
org.jboss.resteasy.core.NoMessageBodyWriterFoundFailure: Could not find MessageBodyWriter for response object of type: kotlinx.coroutines.reactor.FlowAsFlux of media type: application/octet-stream
at org.jboss.resteasy.core.ServerResponseWriter.lambda$writeNomapResponse$3(ServerResponseWriter.java:124)
at org.jboss.resteasy.core.interception.jaxrs.ContainerResponseContextImpl.filter(ContainerResponseContextImpl.java:403)
at org.jboss.resteasy.core.ServerResponseWriter.executeFilters(ServerResponseWriter.java:251)
...
Here is an example of how to start a reactive file download with SmallRye Mutiny. The main function is getFile:
@GET
@Path("/f/{fileName}")
@Produces(MediaType.APPLICATION_OCTET_STREAM)
public Uni<Response> getFile(@PathParam String fileName) {
    File nf = new File(fileName);
    log.info("file:" + nf.exists());
    ResponseBuilder response = Response.ok((Object) nf);
    // use the file name (not the full path) in the download header
    response.header("Content-Disposition", "attachment;filename=" + nf.getName());
    Uni<Response> re = Uni.createFrom().item(response.build());
    return re;
}
You can test locally with mvn quarkus:dev: go to http://localhost:8080/hello/list/test to see what files are there, then call http://localhost:8080/hello/f/reactive-file-download-dev.jar to start the download.
I did not check Flux (which looks more like Spring than Quarkus); feel free to share your thoughts. I am just learning and answering/sharing.
As of this commit, Quarkus has out-of-the-box support for AsyncFile. So, we can stream down a file by returning an AsyncFile instance.
For example, in a JAX-RS resource controller:
// we need a Vertx instance for accessing the filesystem
@Inject
Vertx vertx;

@GET
@Path("/file-data-1")
@Produces(MediaType.TEXT_PLAIN)
public Uni<Response> streamDataFromFile1()
{
    final OpenOptions openOptions = (new OpenOptions()).setCreate(false).setWrite(false);
    Uni<AsyncFile> uni1 = vertx.fileSystem()
                               .open("/srv/texts/hello.txt", openOptions);
    return uni1.onItem()
               .transform(asyncFile -> Response.ok(asyncFile)
                                               .header("Content-Disposition", "attachment;filename=\"Hello.txt\"")
                                               .build());
}
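To try the download from the client side (a usage sketch; the path assumes the resource class above has no extra class-level @Path prefix), curl can save the attachment under the name sent in the Content-Disposition header:
curl -OJ http://localhost:8080/file-data-1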

Corda - Failed to find a store at certificates\sslkeystore.jks

Corda open source on Linux, with node RPC SSL enabled. I am getting the error "Failed to find a store at certificates\sslkeystore.jks". Any ideas? I have entered an absolute path in keyStorePath.
You must follow the steps in this paragraph, which I have detailed for you below: https://docs.corda.net/clientrpc.html#wire-security
When you enable RPC SSL, you must run this command one time (you will be asked to supply 2 new passwords):
java -jar corda.jar generate-rpc-ssl-settings
It will create rpcsslkeystore.jks under the certificates folder, and rpcssltruststore.jks under the certificates/export folder.
Inside your node.conf, supply the path and password of rpcsslkeystore.jks:
rpcSettings {
    useSsl = true
    ssl {
        keyStorePath = ${baseDirectory}/certificates/rpcsslkeystore.jks
        keyStorePassword = password
    }
    standAloneBroker = false
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10004"
}
Now if you have a webserver, inside NodeRPCConnection you must use the constructor that takes a ClientRpcSslOptions parameter:
// RPC SSL properties.
@Value("${config.rpc.ssl.truststorepath}")
private String trustStorePath;
@Value("${config.rpc.ssl.truststorepassword}")
private String trustStorePassword;

@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    ClientRpcSslOptions clientRpcSslOptions = new ClientRpcSslOptions(Paths.get(trustStorePath),
            trustStorePassword, "JKS");
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress, clientRpcSslOptions, null);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
}
We added two extra attributes above that you must now supply when starting the webserver; to do that, modify your clients module's build.gradle:
task runNodeServer(type: JavaExec, dependsOn: jar) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'com.example.server.ServerKt'
    args '--server.port=50005', '--config.rpc.host=localhost',
            '--config.rpc.port=10005', '--config.rpc.username=user1', '--config.rpc.password=test',
            '--config.rpc.ssl.truststorepath=/path-to-project/build/nodes/your-node/certificates/export/rpcssltruststore.jks',
            '--config.rpc.ssl.truststorepassword=password'
}
If you're planning to connect to the node with a standalone shell, you must do something similar, but it didn't work for me; I reported the following bug: https://github.com/corda/corda/issues/5955

Terraform Provisioner "local-exec" not working as expected | VPC Peering Connection Accept issue

I'm unable to get the auto-accept peering done through the workaround mentioned in this link (Why am I getting a permissions error when attempting to auto_accept vpc peering in Terraform?) via the provisioner option.
See my Terraform code below. Can someone help me out?
provider "aws" {
region = "us-east-1"
profile = "default"
}
provider "aws" {
region = "us-east-1"
profile = "peer"
alias = "peer"
}
data "aws_caller_identity" "peer" {
provider = "aws.peer"
}
resource "aws_vpc_peering_connection" "service-peer" {
vpc_id = "vpc-123a56789bc"
peer_vpc_id = "vpc-YYYYYY"
peer_owner_id = "012345678901"
peer_region = "us-east-1"
accepter {
allow_remote_vpc_dns_resolution = true
}
requester {
allow_remote_vpc_dns_resolution = true
}
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
Output I'm getting:
Error: Error applying plan:
1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: 1 error(s) occurred:
* aws_vpc_peering_connection.servicehub-peer: Unable to modify peering options. The VPC Peering Connection "pcx-08ebd316c82acacd9" is not active. Please set `auto_accept` attribute to `true`, or activate VPC Peering Connection manually.
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure
Whereas I'm able to run the AWS CLI command successfully via the Linux shell, outside the Terraform template. Let me know if I'm missing something in the Terraform script.
Try moving your "local-exec" out into a null_resource and adding a depends_on link to your VPC peering connection:
resource "null_resource" "peering-provision" {
depends_on = ["aws_vpc_peering_connection.service-peer"]
provisioner "local-exec" {
command = "aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id=${aws_vpc_peering_connection.service-peer.id} --region=us-east-1 --profile=peer"
}
}
As Koe said, it may be better to use the auto_accept option.
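As a sketch of that direction (assuming the aliased "peer" provider from the question; auto_accept on aws_vpc_peering_connection itself only works when the requester and the accepter live in the same account), the AWS provider also ships a dedicated accepter resource that removes the need for the local-exec workaround:
resource "aws_vpc_peering_connection_accepter" "peer" {
  # runs under the accepter account's credentials ("peer" is a hypothetical resource name)
  provider                  = "aws.peer"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.service-peer.id}"
  auto_accept               = true
}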

Is Azure Function to Function authentication with MSI supported

I created two Azure Function Apps, both set up with Authentication/Authorization, so an AD app was created for each. I would like to set up AD auth from one function to the other using MSI. I set up the client function with Managed Service Identity using an ARM template. I created a simple test function to get the access token, and it returns: Microsoft.Azure.Services.AppAuthentication: Token response is not in the expected format.
try {
    var azureServiceTokenProvider = new AzureServiceTokenProvider();
    string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://myapp-registration-westus-dev.azurewebsites.net/");
    log.Info($"Access Token: {accessToken}");
    return req.CreateResponse(new {token = accessToken});
}
catch(Exception ex) {
    log.Error("Error", ex);
    throw;
}
Yes, there is a way to do this. I'll explain at a high level, and then add an item to the MSI documentation backlog to write a proper tutorial for this.
What you want to do is follow this Azure AD authentication sample, but only configure and implement the parts for the TodoListService: https://github.com/Azure-Samples/active-directory-dotnet-daemon.
The role of the TodoListDaemon will be played by a Managed Service Identity instead. So you don't need to register the TodoListDaemon app in Azure AD as instructed in the readme. Just enable MSI on your VM/App Service/Function.
In your client-side code, when you make the call to MSI (on a VM, or in a Function or App Service), supply the TodoListService's AppID URI as the resource parameter. MSI will fetch a token for that audience for you.
The code in the TodoListService example will show you how to validate that token when you receive it.
So essentially, what you want to do is register an App in Azure AD, give it an AppID URI, and use that AppID URI as the resource parameter when you make the call to MSI. Then validate the token you receive at your service/receiving side.
Please check that the resource ID used, "https://myapp-registration-westus-dev.azurewebsites.net/", is accurate. I followed the steps here to set up Azure AD authentication, used the same code as you, and was able to get a token:
https://learn.microsoft.com/en-us/azure/app-service/app-service-mobile-how-to-configure-active-directory-authentication
You can also run this code to check the exact error returned by MSI. Do post the error if it does not help resolve the issue.
HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Add("Secret", Environment.GetEnvironmentVariable("MSI_SECRET"));
var response = await client.GetAsync(String.Format("{0}/?resource={1}&api-version={2}", Environment.GetEnvironmentVariable("MSI_ENDPOINT"), "https://myapp-registration-westus-dev.azurewebsites.net/", "2017-09-01"));
string msiResponse = await response.Content.ReadAsStringAsync().ConfigureAwait(false);
log.Info($"MSI Response: {msiResponse}");
Update:
This project.json file and run.csx file work for me. Note: the project.json refers to .NET 4.6, and as per the Azure Functions documentation (link in comments), .NET 4.6 is the only supported version as of now. You do not need to upload the referenced assembly again. Most probably, an incorrect manual upload of the netstandard assembly, instead of net452, is causing your issue.
Only the .NET Framework 4.6 is supported, so make sure that your project.json file specifies net46 as shown here. When you upload a project.json file, the runtime gets the packages and automatically adds references to the package assemblies. You don't need to add #r "AssemblyName" directives. To use the types defined in the NuGet packages, add the required using statements to your run.csx file.
project.json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.Azure.Services.AppAuthentication": "1.0.0-preview"
      }
    }
  }
}
run.csx
using Microsoft.Azure.Services.AppAuthentication;
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
try
{
var azureServiceTokenProvider = new AzureServiceTokenProvider();
string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://vault.azure.net/");
log.Info($"Access Token: {accessToken}");
return req.CreateResponse(new {token = accessToken});
}
catch(Exception ex)
{
log.Error("Error", ex);
throw;
}
}