Why set up Serialization Context in Corda off node?

Recently, when I wanted to sign something with a certificate outside a node, I got the exception below:
Caused by: java.lang.IllegalStateException: Expected exactly 1 of
{nodeSerializationEnv, globalSerializationEnv,
contextSerializationEnv, inheritableContextSerializationEnv} but got:
{}
https://github.com/corda/corda/blob/671a9d232cf1f29dbce4432bc91096ffd098a91c/core/src/main/kotlin/net/corda/core/serialization/internal/SerializationEnvironment.kt#L91
While debugging, I found that the object is first serialised and then signed, so I had to set up a serialisation context before it could be serialised and signed.
I have a limited understanding of why this is required. I understand that different contexts are needed for P2P and RPC calls, but I'm not entirely sure why. Can someone please fill me in with some background?

The internal library you are using to sign the certificate requires the certificate to be serialised first. In turn, this requires you to specify a serialisation context. A serialisation context defines how serialisation is performed in various situations, such as P2P, client-side RPC, server-side RPC, storage and checkpointing.
Note that these serialisation contexts are set for you automatically when running a node or a suite of node tests. You are only encountering this issue because you are using an internal library outside the context where it is expected to be used.
In your case, you should probably use globalSerializationEnv, which is the serialisation environment used for mock nodes and nodes created using the node driver. nodeSerializationEnv is used by the node itself, and contextSerializationEnv and inheritableContextSerializationEnv are used for various platform tests.
For educational purposes, it can be helpful to look at how the node sets up its serialisation framework when it starts up (see https://github.com/corda/corda/blob/release-V3/node/src/main/kotlin/net/corda/node/internal/Node.kt):
nodeSerializationEnv = SerializationEnvironmentImpl(
        SerializationFactoryImpl().apply {
            registerScheme(KryoServerSerializationScheme())
            registerScheme(AMQPServerSerializationScheme(cordappLoader.cordapps))
        },
        p2pContext = AMQP_P2P_CONTEXT.withClassLoader(classloader),
        rpcServerContext = KRYO_RPC_SERVER_CONTEXT.withClassLoader(classloader),
        storageContext = AMQP_STORAGE_CONTEXT.withClassLoader(classloader),
        checkpointContext = KRYO_CHECKPOINT_CONTEXT.withClassLoader(classloader))
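If you only need serialisation to work in a standalone tool, a minimal off-node setup can follow the same pattern. The sketch below is not from the original answer and leans on the same internal, version-specific APIs, so treat the exact class names and constructors as assumptions to check against your Corda version:

// Sketch only: internal Corda APIs; names mirror the node setup above and may differ per version.
nodeSerializationEnv = SerializationEnvironmentImpl(
        SerializationFactoryImpl().apply {
            // Register only what the tool needs; the AMQP scheme is enough for
            // serialising a certificate before signing it.
            registerScheme(AMQPServerSerializationScheme(emptyList()))
        },
        p2pContext = AMQP_P2P_CONTEXT)

Any one of the four slots named in the exception will do; once exactly one of them is set, the check that produced the IllegalStateException passes.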

Related

Can we use standalone Spring Cloud Schema Registry with Confluent's KafkaAvroSerializer?

I have a project using Spring Cloud Stream with the Kafka Streams binder. For the output of a stream, I am using Avro, with the Serde provided by Confluent (io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde).
I am able to use it with the Confluent Schema Registry; serialization and deserialization take place correctly.
However, I wanted to see if we can use the Spring Cloud Schema Registry Server instead of the Confluent one. I configured a standalone Schema Registry server and set the schema registry in my project to it (changed the schemaRegistryClient.endpoint and schema.registry.url properties).
When I tried it out, it seems Spring Cloud is able to work with the standalone server: it registers the schema available in the resources folder as a .avsc file. However, when I send a message, the Confluent serializer continues to treat it as a Confluent Schema Registry (which has different REST endpoints from the Spring Schema Registry). As a result, it gets a 405 response code.
We get the following exception (partial stack trace):
org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: <my-avro-schema>
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:230)
It seems to me that there are two possibilities:
Spring Schema Registry Server can work only with the content-type provided by Spring (specified as content-type: application/*+avro) and not with the native Serde provided by Confluent, or
There is an issue with the project configuration.
Can someone help me figure out which one it is? If it is the second one, can someone point out what is wrong?
Each schema registry provider requires its own SerDe library. For example, if you would like to integrate the AWS Glue Schema Registry with Kafka, you would need Amazon's SerDe library. Likewise, Confluent's SerDe library expects Confluent's Schema Registry at the address specified in the schema.registry.url property.
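To make that concrete, here is a small Kotlin sketch (my illustration, not from the answer; the record type bound and URL are placeholders) showing Confluent's SpecificAvroSerde being pointed at a registry via schema.registry.url, which it expects to speak the Confluent REST API:

import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde
import org.apache.avro.specific.SpecificRecord

// Builds a value serde wired to a Confluent-compatible registry.
// The URL passed in must be a real Confluent Schema Registry endpoint.
fun <T : SpecificRecord> confluentValueSerde(registryUrl: String): SpecificAvroSerde<T> =
    SpecificAvroSerde<T>().apply {
        // false = this serde is used for record values, not keys
        configure(mapOf("schema.registry.url" to registryUrl), false)
    }

Pointing schema.registry.url at the Spring Cloud Schema Registry Server fails in exactly the way shown in the stack trace, because the serializer posts to Confluent-specific REST endpoints.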

Confused about health checking protocol

I have read the doc, source code and issue below:
https://github.com/grpc/grpc/blob/master/doc/health-checking.md
https://github.com/grpc/grpc-node/blob/master/packages/grpc-health-check/test/health_test.js
https://github.com/grpc/grpc/issues/10428
I'll provide an example and try to explain:
// Import package
let health = require('grpc-health-check');

// Define service status map. Key is the service name, value is the corresponding status.
// By convention, the empty string "" key represents the status of the entire server.
const statusMap = {
  "ServiceFoo": proto.grpc.health.v1.HealthCheckResponse.ServingStatus.SERVING,
  "ServiceBar": proto.grpc.health.v1.HealthCheckResponse.ServingStatus.NOT_SERVING,
  "": proto.grpc.health.v1.HealthCheckResponse.ServingStatus.NOT_SERVING,
};

// Construct the service implementation
let healthImpl = new health.Implementation(statusMap);

// Add the service and implementation to your pre-existing gRPC-node server
server.addService(health.service, healthImpl);
I am not clear about the following points:
Does the service name in statusMap need to be the same as the service name in the protocol buffers file? Or can the service name be specified arbitrarily? If so, how does the service name map to the service defined in the protocol buffers?
From the health checking protocol:
The server should register all the services manually and set the individual status
Why do we need to register manually? If the service code can be generated, why doesn't gRPC automatically register the service names in statusMap for us? (Imagine setting the status of 100 services one by one.)
The service status is hard-coded and cannot be changed at application runtime. If my service becomes unavailable at runtime for some reason, such as misconfiguration or a downstream service being unavailable, the status of the service is still always SERVING (because it is hard-coded). If so, what is the point of the health check?
For a RESTful API, we can provide a /health-check or /ping endpoint to check that the entire server is running normally.
Regarding the service names, the first linked document says this:
The suggested format of service name is package_names.ServiceName, such as grpc.health.v1.Health.
This does correspond to the package names and service name defined in the Protobuf definition.
The services need to be registered "manually" because the status is determined at the application level, which the grpc library does not know about, and a registered service name is only meaningful along with the corresponding status. In addition, the naming format mentioned above is just a convention; the health check service user is not constrained to it, and the actual services on the server are not constrained to use the standard /package_names.ServiceName/MethodName method naming scheme either.
Regarding the third point, the service status should not be hardcoded, and can be changed at runtime. The HealthImplementation class used in the code in the question has a setStatus method that can be used to update the status.
Also, as mentioned in a comment in the code in the question,
By convention, the empty string "" key represents the status of the entire server.
That can be used as the equivalent of the /health-check or /ping REST APIs.
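For comparison only (the question uses grpc-node, but the pattern is the same), here is a Kotlin sketch using grpc-java's HealthStatusManager from the grpc-services artifact (the package location varies slightly between versions; the port and service name are placeholders):

import io.grpc.ServerBuilder
import io.grpc.health.v1.HealthCheckResponse.ServingStatus
import io.grpc.protobuf.services.HealthStatusManager

fun main() {
    val health = HealthStatusManager()
    val server = ServerBuilder.forPort(50051)
        .addService(health.healthService)   // exposes grpc.health.v1.Health
        .build()
        .start()

    // "" conventionally represents the status of the whole server
    health.setStatus("", ServingStatus.SERVING)

    // Flip a status at runtime, e.g. when a downstream dependency goes away.
    // The key should match the name clients query, e.g. "my.package.ServiceFoo".
    health.setStatus("my.package.ServiceFoo", ServingStatus.NOT_SERVING)

    server.awaitTermination()
}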

Mock real gRPC server responses

We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in Kubernetes) in our pipeline. It also relies on an external gRPC server over which we have no control.
Above is a picture of what we'd like to have happen. The white box on the left is code that provides the Microservice Boundary with 'external' data. It then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated to those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses to the two gRPC services we call (blue box).
Since our integration tests are self-contained right now, and we don't want to write an entirely new actual gRPC server implementation just to mimic calls, is there a way to stand up a real gRPC server and configure it to return responses? The requirement is pretty much a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose those as endpoints. Proto files must be able to have different package names
using files we can store in source control, configure the responses to each call
able to run in a Linux Docker container (no Windows)
I did find GripMock, which seems to be almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 GripMock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today, there is a commercial tool that supports gRPC called Traffic Parrot, which allows you to create gRPC mocks based on your proto files. You can give it multiple proto files, store the mocks in Git and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on proto files; you have to create them manually. Also, the project has not been keeping up with Proto and gRPC developments, e.g. the package name issue you discovered yourself above (it works only if the package names in the different proto files are the same). There are a few other open-source tools like grpc-wiremock, grpc-mock or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see the sample data as well.
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
const implementations = {
  ex1: (call: any, callback: any) => {
    const response: any =
      new this.proto.ExampleResponse.constructor({msg: "the response message"});
    callback(null, response);
  },
};

this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();

How to create/simulate a deserialization exception when using a schema registry (in addition to brokers and zookeepers)

We are using spring-kafka 2.2.7.RELEASE to produce and consume Avro messages, and we use a schema registry for schema validation with 'FORWARD_TRANSITIVE' as the compatibility type. I'm trying to use 'ErrorHandlingDeserializer2' from spring-kafka to handle the exception/error when a deserializer fails to deserialize a message, and I'm writing a component test to test this configuration. My component test is expected to have the steps below:
Spin up a local kafka cluster using docker containers
Send an Avro message (using KafkaTemplate) with an invalid schema onto a test topic to re-create/simulate the deserialization exception
What's happening is that, since we have the schema registry in place, if I send a message with a new (invalid) schema, the registry validates it against the configured compatibility type and doesn't let me produce the message onto Kafka, throwing an exception at the producer level itself.
My question is: in this scenario, how can I create/simulate a deserialization exception to test my configuration? Please suggest.
Note: I don't want to disable/stop the schema registry, because that wouldn't reflect our prod setup.
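One common way to do this, sketched below in Kotlin (my illustration, not from the thread; topic name, payload and addresses are placeholders), is to bypass the Avro serializer entirely for the bad record: produce raw bytes with a ByteArraySerializer, so the registry never validates anything and only the consumer's deserializer fails, which is what ErrorHandlingDeserializer2 is there to catch.

import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.common.serialization.ByteArraySerializer
import org.springframework.kafka.core.DefaultKafkaProducerFactory
import org.springframework.kafka.core.KafkaTemplate

// Publishes a payload that is not valid Avro, so only the consumer side fails to deserialize.
fun sendCorruptRecord(bootstrapServers: String, topic: String) {
    val props = mapOf<String, Any>(
        ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to bootstrapServers,
        ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG to ByteArraySerializer::class.java,
        ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG to ByteArraySerializer::class.java
    )
    val template = KafkaTemplate(DefaultKafkaProducerFactory<ByteArray, ByteArray>(props))
    template.send(topic, "this-is-not-avro".toByteArray())
    template.flush()
}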

Flash Builder: localhost works 100%, remote host just shows the title of Object for every entry

I have finally gotten my Flash Builder to look at my remote services, but now I have a problem: my remote information, which should be the same except for a lot more entries, just displays each object with the title [object Object]. I have had a look around, and if I test the service locally it works, returning all the information under Response Name 'object' and Response Value 'Object'.
On my localhost configuration this shows the name that is inside my Object items. How can I fix this?
[object Object] is the result of the toString() method of Object. If you get this, it probably means your custom object type is being returned as a generic Object from the remote AMF service. A number of things could cause this. Here are a few to check:
1) Make sure that your custom object type is compiled into the app. If the object is never used explicitly, the Flex compiler will not put it in the final SWF. You can force this by declaring a dummy variable:
private var myUnusedObject : MyCustomObjectType;
Or, I believe, there is a compiler flag to force unused classes to be compiled into the SWF.
2) You may have to add a formal mapping on your server. This depends primarily on what server-side tech you're using. In AS3 you add [RemoteClass(alias="...")] metadata to the class. In ColdFusion you use the alias attribute on the cfcomponent tag. I believe in WebORB.NET I had to add the mapping in an XML config file [but it's been years since I've done that]. I assume alternative technologies use similar approaches.
3) Check case sensitivity on the path names for your server code and make sure that the aliases (mentioned in 2) match.
4) In ColdFusion AMF you have to make sure that your public properties and types match up. They must be in the same order in your AS3 class as they are in your remote CFC. The property types must match. String to String; Boolean to Boolean, etc... I assume other AMF implementations have similar restrictions.
