Confused about health checking protocol - grpc

I have read the following doc, source code, and issue:
https://github.com/grpc/grpc/blob/master/doc/health-checking.md
https://github.com/grpc/grpc-node/blob/master/packages/grpc-health-check/test/health_test.js
https://github.com/grpc/grpc/issues/10428
Here is an example, which I will try to explain:
// Import package
let health = require('grpc-health-check');
// Define service status map. Key is the service name, value is the corresponding status.
// By convention, the empty string "" key represents the status of the entire server.
const statusMap = {
  "ServiceFoo": proto.grpc.health.v1.HealthCheckResponse.ServingStatus.SERVING,
  "ServiceBar": proto.grpc.health.v1.HealthCheckResponse.ServingStatus.NOT_SERVING,
  "": proto.grpc.health.v1.HealthCheckResponse.ServingStatus.NOT_SERVING,
};
// Construct the service implementation
let healthImpl = new health.Implementation(statusMap);
// Add the service and implementation to your pre-existing gRPC-node server
server.addService(health.service, healthImpl);
I am not clear about the following points:
Does the service name in statusMap need to be the same as the service name in the protocol buffers file, or can the service name be arbitrarily specified? If so, how does the service name map to the service defined in the protocol buffers file?
From the health checking protocol:
The server should register all the services manually and set the individual status
Why do we need to register manually? If the service code can be generated, why doesn't gRPC automatically register the service names in statusMap for us? (Imagine setting the status of 100 services one by one.)
The service status is hard-coded and cannot be changed at application runtime. If my service becomes unavailable at runtime for some reason, such as a misconfiguration or an unavailable downstream service, its status would still be serving (because it is hard-coded). If so, what is the meaning of the health check?
For a RESTful API, we can provide a /health-check or /ping endpoint to check that the entire server is running normally.

Regarding the service names, the first linked document says this:
The suggested format of service name is package_names.ServiceName, such as grpc.health.v1.Health.
This does correspond to the package names and service name defined in the Protobuf definition.
The services need to be registered "manually" because the status is determined at the application level, which the gRPC library does not know about; a registered service name is only meaningful together with its corresponding status. In addition, the naming format mentioned above is just a convention: the health check service user is not constrained to it, and the actual services on the server are not constrained to use the standard /package_names.ServiceName/MethodName method naming scheme either.
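For example, the status map can be built programmatically from the list of service names the server registers, so "manual" registration does not have to mean a hundred hand-written entries. A minimal sketch along the lines of the question's code (the myapp.v1.* service names are hypothetical; proto, health, and server are as in the question):
// Build the status map from the service names this server registers,
// using the same package_names.ServiceName convention as the proto files.
const serviceNames = ["myapp.v1.ServiceFoo", "myapp.v1.ServiceBar"];
const statusMap = {
  // By convention, "" represents the status of the entire server.
  "": proto.grpc.health.v1.HealthCheckResponse.ServingStatus.SERVING,
};
for (const name of serviceNames) {
  statusMap[name] = proto.grpc.health.v1.HealthCheckResponse.ServingStatus.SERVING;
}
const healthImpl = new health.Implementation(statusMap);
server.addService(health.service, healthImpl);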
Regarding the third point, the service status should not be hardcoded, and can be changed at runtime. The HealthImplementation class used in the code in the question has a setStatus method that can be used to update the status.
Also, as mentioned in a comment in the code in the question,
By convention, the empty string "" key represents the status of the entire server.
That can be used as the equivalent of a /health-check or /ping REST endpoint.
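For instance, a dependency watchdog could flip the status at runtime rather than leaving the initial statusMap values fixed. A sketch, assuming a setStatus(serviceName, status) signature and the hypothetical myapp.v1.ServiceFoo name:
const ServingStatus = proto.grpc.health.v1.HealthCheckResponse.ServingStatus;

// Hypothetical handlers, wired to whatever detects a downstream outage.
function onDependencyDown() {
  healthImpl.setStatus("myapp.v1.ServiceFoo", ServingStatus.NOT_SERVING);
  healthImpl.setStatus("", ServingStatus.NOT_SERVING); // whole-server status
}

function onDependencyRecovered() {
  healthImpl.setStatus("myapp.v1.ServiceFoo", ServingStatus.SERVING);
  healthImpl.setStatus("", ServingStatus.SERVING);
}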

Can we use standalone Spring Cloud Schema Registry with Confluent's KafkaAvroSerializer?

I have a project using Spring Cloud Stream with the Kafka Streams binder. For the output of a stream, I am using Avro, with the Serde provided by Confluent (io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde).
I am able to use it with the Confluent Schema Registry; serialization and deserialization take place correctly.
However, I wanted to see if we can use the Spring Cloud Schema Registry Server instead of the Confluent one. I configured a standalone Schema Registry server and set the schema registry in my project to it (changed the schemaRegistryClient.endpoint and schema.registry.url properties).
When I tried it out, Spring Cloud seems able to work with the standalone server: it registers the schema available in the resources folder as a .avsc file. However, when I send a message, the Confluent serializer continues to treat it as a Confluent Schema Registry (which has different REST endpoints from the Spring Schema Registry). As a result, it gets a 405 response code.
We get the following exception (partial stack trace):
org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: <my-avro-schema>
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:230)
It seems to me that there are two possibilities:
The Spring Schema Registry Server can work only with the content type provided by Spring (specified as content-type: application/*+avro) and not with the native Serde provided by Confluent, or
There is an issue with the project configuration.
Can someone help me figure out which one is it? If it is the second one, can someone point out what is wrong?
Each schema registry provider requires its own proprietary SerDe library. For example, if you would like to integrate the AWS Glue Schema Registry with Kafka, you would need Amazon's SerDe library. Hence, Confluent's SerDe library expects Confluent's Schema Registry at the address specified in the schema.registry.url property.
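For illustration, the coupling shows up directly in the configuration: the URL below is consumed by Confluent's serializer, which only speaks the Confluent Schema Registry REST API (the property keys follow the usual Kafka Streams binder pass-through pattern; the host name is a placeholder):
# Confluent's SpecificAvroSerde calls the Confluent Schema Registry REST API
# at this address; pointing it at a Spring Cloud Schema Registry server fails,
# because the endpoints and payloads differ.
spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url=http://confluent-registry:8081
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde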

Mock real gRPC server responses

We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in Kubernetes) in our pipeline. It also relies on an external gRPC server over which we have no control.
Above is a picture of what we'd like to have happen. The white box on the left is code that provides the Microservice Boundary with 'external' data. It then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated with those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses to the two gRPC services we call (blue box).
Since our integration tests are self-contained right now, and we don't want to write an entirely new actual gRPC server implementation just to mimic calls, is there a way to stand up a real gRPC server and configure it to return responses? The request is pretty much like a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose those as endpoints; the proto files must be able to have different package names
configure the responses to each call using files we can store in source control
run in a Linux Docker container (no Windows)
I did find GripMock, which seemed almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 GripMock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today, there is a commercial tool that supports gRPC, called Traffic Parrot, which allows you to create gRPC mocks based on your proto files. You can give it multiple proto files, store the mocks in Git, and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on proto files; you have to create them manually. Also, as of today the project has not been keeping up with proto and gRPC developments, e.g. the package-name issue you discovered yourself above (it works only if the package names in the different proto files are the same). There are a few other open-source tools, like grpc-wiremock, grpc-mock, or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
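For reference, a GripMock stub is a small JSON file that can be kept in source control and mounted into the container; the service, method, and field names below are hypothetical:
{
  "service": "Greeter",
  "method": "SayHello",
  "input": { "equals": { "name": "test-user" } },
  "output": { "data": { "message": "Hello, test-user" } }
}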
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see sample data as well.
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
// Stub implementation for the ExampleService.ex1 method.
const implementations = {
  ex1: (call: any, callback: any) => {
    const response: any =
      new this.proto.ExampleResponse.constructor({msg: "the response message"});
    callback(null, response);
  },
};
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();

How to get current channel domain in pipeline based on the current application

In a pipeline, I need to get the channel domain to which the current application is assigned.
I can get the current ApplicationBO instance, but I haven't been able to get the channel domain from it (I tried inspecting it in the debugger, but I can only get the domain for the application, not for the channel).
This is how applications and channels are currently assigned:
Company organization:
  Channel 1
    App 1   <--- get Channel 1 if in this app
  Channel 2
    App 2   <--- get Channel 2 if in this app
Both applications share a common cartridge, which contains the pipeline in which I need to get the current channel.
There are two options:
Call the pipeline DetermineRepositories-Channel, which returns a Repository object (that is, the channel). On the Repository, use the object path Repository:RepositoryDomain to get the Domain. I'm not sure how big the performance implication is, though.
Use the object path ApplicationBO:Extension("PersistentObjectBOExtension"):PersistentObject:Domain to get the owning domain of the application itself. That will always be the channel (Domain), because that's where storefront applications are born.
In case you need to convert the Domain object to a Repository object you can use pipelet GetRepositoryByRepositoryDomain.

Simplest possible BAM Scenario

I’m trying to set up a very simple BAM scenario within BizTalk Server 2013R2 upon which to build, involving tracking just the time of all incoming messages processed by a port.
To this end I have:
- Within Excel, created an Activity Definition (called SimpleReceiveTest) containing a single Item called ReceiveTime of type milestone (date/time), and a View Definition (also called SimpleReceiveTest) containing just this Activity Definition and Item.
- Imported this BAM definition spreadsheet using bm.exe (commands sketched after this list).
- Added view rights to SimpleReceiveTest, again using bm.exe.
- Launched the Tracking Profile Editor, imported the BAM Activity Definition, and mapped ActivityID = MessageID and ReceiveTime = PortStartTime by drag and drop from the Messaging Property Schema.
- Set the Port Mappings for MessageID and PortStartTime to relate to a test Receive Port, ReceivePort1, that I am using for testing. This port uses a pass-through pipeline.
- Saved and applied the above Tracking Profile.
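For reference, the two bm.exe steps above were along these lines (the spreadsheet file name and account name are placeholders):
bm.exe deploy-all -DefinitionFile:SimpleReceiveTest.xls
bm.exe add-account -AccountName:MYDOMAIN\bamuser -View:SimpleReceiveTest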
It is my understanding that for any message received on port ReceivePort1 I should now get a tracking activity created. However, this is not happening: there are no records in any of the BAM tables/views, and no data is available within the BAM Portal.
I have tried restarting the hosts, and have verified that the TDDS_FailedTrackingData table is empty, there is nothing relevant in the event log, a tracking host is running, and the SQL Agent jobs are running. I have also tried running these jobs manually.
Have I missed something, and am I correct in my expectation that this simple scenario should create tracked activities for any messages passing through the Receive Port? If so, what can I try to further diagnose this?
Now fixed: it's actually a bug in vanilla BizTalk 2013 R2 when using a standard pipeline, which has been fixed in CU2.
FIX: BAM tracking doesn’t work when you use the XMLReceive or a custom pipeline in BizTalk Server

BizTalk WCF-WebHttp (WCF Web Publishing Wizard)

I am new to BizTalk, trying to publish and consume a service using WCF-WebHttp (using BizTalk 2013, VS 2012).
I am getting the following message and don't know what to do next to solve this issue:
You have created a service.
To test this service, you will need to create a client and use it to call the service. You can do this using the svcutil.exe tool from the command line with the following syntax:
svcutil.exe http://[host]/expwebhttpsampledesktop/service1.svc?singleWsdl
"svcutil.exe" command it generates .cs, .wsdl, and metadata.xml files for me.
not sure what i am doing wrong here but trying to consume the service i made. and at the end of it i am getting following error
"Error consuming WCF service metadata. Message part missing element. Correct service description ""http://tempuri.org/" message type "service1_operation1_inputmessage"" part "Part" and return the wizard."]
Thank you in advance.
You need to create a client that will consume the service. A client can be anything from a simple console app, a BizTalk Send Port, another web service, or a WinForms/WPF app. The client will invoke your service (possibly passing parameters), your service will do its stuff, and it will return a response back to the client.
There are a number of ways to create a client; however, you might want to start with this tutorial from MSDN: http://msdn.microsoft.com/en-us/library/ms733133.aspx.
Alternatively, you might want to search for 'Add Service Reference Visual Studio 2012'. Adding a service reference creates the necessary libraries for your client to consume the service.
UPDATE: here are the relevant steps in a little more detail....
To add a Service Reference, right-click on your project and select 'Add Service Reference'.
Within the 'Add Service Reference' dialog, enter the address of the service (in your case http://[host]/expwebhttpsampledesktop/service1.svc) and click 'Go' for the wizard to auto-discover the service methods. Finally, update the service Namespace.
You will now be able to reference your service just like any other type within C# in order to invoke it.
HTH, Nick.
