How to respond with a success JSON message after completing a transaction in Corda - corda

Hi everyone, I am working on a project in which I need to send a response in JSON format to the CLI indicating that the transaction has completed. Let me give you an example. Consider that I have started a flow with start ExampleFlow pojo: {iouValue: 7}, otherParty: "O=PartyB,L=London,C=GB", and the result will be:
Starting
Generating transaction based on new IOU.
Verifying contract constraints.
Signing transaction with our private key.
Gathering the counter party's signature.
Collecting signatures from counterparties.
Verifying collected signatures.
Obtaining notary signature and recording transaction.
Broadcasting transaction to participants
Done
Flow completed with result: SignedTransaction(id=F95406D901209BA77396C1A4D375585C6E051414EE22BE441FC02E5AE147A050)
But what I want is a result in JSON format, not all of that, just something like this:
{response: success }
I just want some success response in JSON format.
I am using the IOU example project.
Thanks

You can achieve that by establishing an RPC connection with your node; call the flow, then return the JSON object.
There are a couple of approaches that you can follow, and I recommend that you go through the samples repository https://github.com/corda/samples to explore them:
Create a webserver (a Spring Boot application) that serves REST APIs which call your flows and return a JSON object: https://github.com/corda/samples/tree/release-V4/spring-webserver
Create a simple Java app that establishes an RPC connection with your node and serves as a client to call a certain method/flow: https://github.com/corda/samples/blob/release-V4/cordapp-example/clients/src/main/java/com/example/server/JavaClientRpc.java
If you follow the webserver sample, you can add a method to your controller that does something like:
@GetMapping(value = "/my-api", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<YourObject> getSomething() {
    // Some code that calls your flow and builds a YourObject instance.
    return ResponseEntity.ok().body(yourObject);
}
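Putting it together, here is a minimal sketch of such a controller method. It assumes a CordaRPCOps field named proxy (as wired up in the spring-webserver sample) and the IOU example's ExampleFlow.Initiator; the endpoint path, parameter names and the usual Spring/Corda imports are omitted placeholders, so adjust them to your project.
@GetMapping(value = "/create-iou", produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<Map<String, String>> createIou(@RequestParam int iouValue,
                                                     @RequestParam String partyName) {
    try {
        // Resolve the counterparty and block until the flow finishes.
        Party otherParty = proxy.wellKnownPartyFromX500Name(CordaX500Name.parse(partyName));
        proxy.startTrackedFlowDynamic(ExampleFlow.Initiator.class, iouValue, otherParty)
             .getReturnValue()
             .get();
        // Return only the JSON success marker instead of the whole SignedTransaction.
        return ResponseEntity.ok(Collections.singletonMap("response", "success"));
    } catch (Exception e) {
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                             .body(Collections.singletonMap("response", "failure"));
    }
}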

So I got the answer. What you need to do is add this dependency to the client's build.gradle:
cordaCompile "net.corda:corda-jackson:$corda_release_version"
After that you just need to implement this code snippet:
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import net.corda.client.jackson.JacksonSupport;

String json = "";
try {
    // A mapper that understands Corda types (SignedTransaction, Party, etc.)
    ObjectMapper mapper = JacksonSupport.createNonRpcMapper();
    json = mapper.writeValueAsString(results);
} catch (JsonProcessingException e) {
    e.printStackTrace();
}
return json;
results can be any data type you want to convert to JSON.
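For example, a hypothetical helper (the method and field names are just an illustration) that turns the flow's SignedTransaction result into the kind of JSON asked for above:
static String successJson(SignedTransaction stx) throws JsonProcessingException {
    // Also needs java.util.Map / java.util.LinkedHashMap plus the imports shown above.
    ObjectMapper mapper = JacksonSupport.createNonRpcMapper();
    Map<String, String> body = new LinkedHashMap<>();
    body.put("response", "success");
    body.put("txId", stx.getId().toString());
    return mapper.writeValueAsString(body); // e.g. {"response":"success","txId":"F95406D9..."}
}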

Related

Spring Cloud Stream Kafka : How to access recordMetadata of kafka headers after producing a message to a kafka topic

I want to get the offset and partition information after I produce a message to a Kafka topic.
I read through the Spring Cloud Stream Kafka binder documentation and found that this can be achieved by fetching the RECORD_METADATA Kafka header.
From Spring documentation: (https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/3.0.0.RELEASE/reference/html/spring-cloud-stream-binder-kafka.html#kafka-producer-properties)
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
I have configured my output binding's name as the record metadata channel in the property file:
spring.cloud.stream.kafka.bindings.acknowledgement-out.producer.record-metadata-channel = acknowledgement-out
my customized interface and producer class as below:
public interface OutputAcknowledgement {

    @Output("acknowledgement-out")
    MessageChannel output();
}
Producer class:
@EnableBinding(OutputAcknowledgement.class)
public class AcknowledgementProducer {

    @Autowired
    OutputAcknowledgement outputAcknowledgement;

    public Boolean produce(Acknowledgement acknowledgement) {
        Message msg = MessageBuilder.withPayload(acknowledgement).build();
        boolean val = outputAcknowledgement.output().send(msg);
        RecordMetadata recordMetadata = msg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
I am getting null for recordMetadata.
Please suggest whether my approach is correct.
You're getting null because the header does not exist in that message object at the time you're accessing it. According to the docs, the metadata is only provided after a successful publish, and it is sent to the record metadata channel rather than attached to the message you built. See this answer on how to get the record metadata by providing a handler/consumer for the record metadata channel.
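One point worth noting is that record-metadata-channel should point at a separate MessageChannel bean, not at the output binding itself. Below is a minimal sketch of consuming that channel with a @ServiceActivator; the bean/channel name recordMetadataChannel is an assumption, so set spring.cloud.stream.kafka.bindings.acknowledgement-out.producer.record-metadata-channel to whatever name you actually use.
import org.apache.kafka.clients.producer.RecordMetadata;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;

@Configuration
public class RecordMetadataConfig {

    // The bean name referenced by the record-metadata-channel producer property.
    @Bean
    public MessageChannel recordMetadataChannel() {
        return new DirectChannel();
    }

    // Invoked for every successful send; the sent message carries the RECORD_METADATA header.
    @ServiceActivator(inputChannel = "recordMetadataChannel")
    public void handleMetadata(Message<?> sent) {
        RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
        System.out.println("partition=" + meta.partition() + ", offset=" + meta.offset());
    }
}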

Quarkus - Reactive file download

Using Quarkus, can somebody give an example of what the server- and client-side code for downloading a file over HTTP with a reactive API looks like?
So far I have tried to return a Flux of NIO ByteBuffers, but it seems not to be supported:
@RegisterRestClient(baseUri = "http://some-page.com")
interface SomeService {
    // same interface for client and server
    @GET
    @Produces(MediaType.APPLICATION_OCTET_STREAM)
    @Path("/somePath")
    fun downloadFile(): reactor.core.publisher.Flux<java.nio.ByteBuffer>
}
Trying to return a Flux on the server-side results in the following exception:
ERROR: RESTEASY002005: Failed executing GET /somePath
org.jboss.resteasy.core.NoMessageBodyWriterFoundFailure: Could not find MessageBodyWriter for response object of type: kotlinx.coroutines.reactor.FlowAsFlux of media type: application/octet-stream
at org.jboss.resteasy.core.ServerResponseWriter.lambda$writeNomapResponse$3(ServerResponseWriter.java:124)
at org.jboss.resteasy.core.interception.jaxrs.ContainerResponseContextImpl.filter(ContainerResponseContextImpl.java:403)
at org.jboss.resteasy.core.ServerResponseWriter.executeFilters(ServerResponseWriter.java:251)
...
Here is an example of how to do a reactive file download with SmallRye Mutiny. The main function is getFile:
@GET
@Path("/f/{fileName}")
@Produces(MediaType.APPLICATION_OCTET_STREAM)
public Uni<Response> getFile(@PathParam String fileName) {
    File nf = new File(fileName);
    log.info("file:" + nf.exists());
    ResponseBuilder response = Response.ok((Object) nf);
    response.header("Content-Disposition", "attachment;filename=" + nf);
    Uni<Response> re = Uni.createFrom().item(response.build());
    return re;
}
You can test this locally with mvn quarkus:dev: go to http://localhost:8080/hello/list/test to see which files are there, and after that call http://localhost:8080/hello/f/reactive-file-download-dev.jar to start the download.
I did not check Flux (which looks more like Spring than Quarkus); feel free to share your thoughts. I am just learning and answering/sharing.
As of this commit, Quarkus has out-of-the-box support for AsyncFile. So, we can stream down a file by returning an AsyncFile instance.
For example, in a JAX-RS resource controller:
// we need a Vertx instance for accessing the filesystem
@Inject
Vertx vertx;

@GET
@Path("/file-data-1")
@Produces(MediaType.TEXT_PLAIN)
public Uni<Response> streamDataFromFile1()
{
    final OpenOptions openOptions = (new OpenOptions()).setCreate(false).setWrite(false);
    Uni<AsyncFile> uni1 = vertx.fileSystem()
        .open("/srv/texts/hello.txt", openOptions);
    return uni1.onItem()
        .transform(asyncFile -> Response.ok(asyncFile)
            .header("Content-Disposition", "attachment;filename=\"Hello.txt\"")
            .build());
}
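The answers above cover the server side only. For the client side, one option that is not Quarkus-specific at all is the JDK 11+ HttpClient, which downloads asynchronously and streams the body straight to a file; a minimal sketch:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.concurrent.CompletableFuture;

public class FileDownloadClient {

    // Returns immediately; the response body is written to `target` as it arrives.
    public static CompletableFuture<Path> download(String url, Path target) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofFile(target))
                     .thenApply(HttpResponse::body);
    }
}
Usage against the endpoint above would be something like download("http://localhost:8080/hello/f/reactive-file-download-dev.jar", Path.of("out.jar")).join();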

Exactly-once delivery: is it possible through spring-cloud-stream-binder-kafka or spring-kafka, and which one to use?

I am trying to achieve exactly once delivery using spring-cloud-stream-binder-kafka in a spring boot application.
The versions I am using are:
spring-cloud-stream-binder-kafka-core-1.2.1.RELEASE
spring-cloud-stream-binder-kafka-1.2.1.RELEASE
spring-cloud-stream-codec-1.2.2.RELEASE
spring-kafka-1.1.6.RELEASE
spring-integration-kafka-2.1.0.RELEASE
spring-integration-core-4.3.10.RELEASE
zookeeper-3.4.8
Kafka version : 0.10.1.1
This is my configuration (cloud-config):
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
I have two main classes:
FeedSink Interface:
package au.com.xyz.proxy.interfaces;

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.MessageChannel;

public interface FeedSink {

    String FEED_PLATFORM_EVENTS_INPUT = "feed_platform_events_input";

    @Input(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    MessageChannel feedPlatformEventsInput();
}
EventConsumer
package au.com.xyz.proxy.consumer;

@Slf4j
@EnableBinding(FeedSink.class)
public class EventConsumer {

    public static final String SUCCESS_MESSAGE =
            "SEND-SUCCESS : Successfully sent message to platform.";
    public static final String FAULT_MESSAGE = "SOAP-FAULT Code: {}, Description: {}";
    public static final String CONNECT_ERROR_MESSAGE = "CONNECT-ERROR Error Details: {}";
    public static final String EMPTY_NOTIFICATION_ERROR_MESSAGE =
            "EMPTY-NOTIFICATION-ERROR Empty Event Received from platform";

    @Autowired
    private CapPointService service;

    /**
     * Method associated with the stream to process messages.
     */
    @StreamListener(FeedSink.FEED_PLATFORM_EVENTS_INPUT)
    public void message(final @Payload EventNotification eventNotification,
                        final @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
        String caseMilestone = "UNKNOWN";
        if (!ObjectUtils.isEmpty(eventNotification)) {
            SysMessage sysMessage = processPayload(eventNotification);
            caseMilestone = sysMessage.getCaseMilestone();
            try {
                ClientResponse response = service.sendPayload(sysMessage);
                if (response.hasFault()) {
                    Fault faultDetails = response.getFaultDetails();
                    log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
                } else {
                    log.info(SUCCESS_MESSAGE);
                }
                acknowledgment.acknowledge();
            } catch (Exception e) {
                log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
            }
        } else {
            log.error(EMPTY_NOTIFICATION_ERROR_MESSAGE);
            acknowledgment.acknowledge();
        }
    }

    private SysMessage processPayload(final EventNotification eventNotification) {
        Gson gson = new Gson();
        String jsonString = gson.toJson(eventNotification.getData());
        log.info("Consumed message for platform events with payload : {} ", jsonString);
        SysMessage sysMessage = gson.fromJson(jsonString, SysMessage.class);
        return sysMessage;
    }
}
I have set the auto-commit property for the Kafka client and the Spring container to false.
As you can see in the EventConsumer class, I call acknowledge only in cases where service.sendPayload is successful and there are no exceptions, and I want the container to move the offset and poll for the next records.
What I have observed is:
Scenario 1 - An exception is thrown and no new messages are published to Kafka. There is no retry to process the message and there seems to be no activity, even after the underlying issue is resolved (the issue I am referring to is downstream server unavailability). Is there a way to retry the processing n times and then give up? Note that this is a retry of processing / re-poll from the last committed offset, not about the Kafka instance being unavailable.
If I restart the service (EC2 instance), processing resumes from the offset of the last successful acknowledge.
Scenario 2 - An exception is thrown and a subsequent message is then pushed to Kafka. I see the new message being processed and the offset moving, which means I lost the message that was not acknowledged. So the question is: given that I have handled the acknowledge, how do I control reading from the last commit rather than just the latest message? I am assuming a poll happens internally that did not take into account (or did not know about) the last message not being acknowledged. I don't think there are multiple threads reading from Kafka; I don't know how the @Input and @StreamListener annotations are controlled. I assume the thread is controlled by the consumer.concurrency property, which defaults to 1.
So I have done research and found a lot of links but unfortunately none of them answers my specific questions.
I looked at (https://github.com/spring-cloud/spring-cloud-stream/issues/575)
which has a comment from Marius (https://stackoverflow.com/users/809122/marius-bogoevici):
Do note that Kafka does not provide individual message acking, which
means that acknowledgment translates into updating the latest consumed
offset to the offset of the acked message (per topic/partition). That
means that if you're acking messages from the same topic partition out
of order, a message can 'ack' all the messages before it.
Not sure whether ordering is the issue when there is only one thread.
Apologies for the long post, but I wanted to provide enough information. The main thing is that I am trying to avoid losing messages while consuming from Kafka, and I am trying to see whether spring-cloud-stream-binder-kafka can do the job or whether I have to look at alternatives.
Update 6th July 2018
I saw this post https://github.com/spring-projects/spring-kafka/issues/431
Is this a better approach to my problem? I can try the latest version of spring-kafka.
@KafkaListener(id = "qux", topics = "annotated4", containerFactory = "kafkaManualAckListenerContainerFactory",
        containerGroup = "quxGroup")
public void listen4(@Payload String foo, Acknowledgment ack, Consumer<?, ?> consumer) {
Will this help in controlling the offset so that it is set to the last successfully processed record? How can I do that from the listen method? consumer.seekToEnd(); - and then how would the listen method reset to get that record?
Does putting the Consumer in the signature provide support for getting a handle to the consumer, or do I need to do anything more?
Should I use Acknowledgment or consumer.commitSync()?
What is the significance of containerFactory? Do I have to define it as a bean?
Do I need @EnableKafka and @Configuration for the above approach to work, bearing in mind the application is a Spring Boot application?
By adding Consumer to the listen method, do I avoid having to implement the ConsumerAware interface?
Last but not least, is it possible to provide an example of the above approach if it is feasible?
Update 12 July 2018
Thanks Gary (https://stackoverflow.com/users/1240763/gary-russell) for providing the tip of using maxAttempts. I have used that approach, and I am able to achieve exactly-once delivery and preserve the order of the messages.
My updated cloud-config:
spring:
  autoconfigure:
    exclude: org.springframework.cloud.netflix.metrics.servo.ServoMetricsAutoConfiguration
  kafka:
    consumer:
      enable-auto-commit: false
  cloud:
    stream:
      kafka:
        binder:
          brokers: "${BROKER_HOST:xyz-aws.local:9092}"
          headers:
            - X-B3-TraceId
            - X-B3-SpanId
            - X-B3-Sampled
            - X-B3-ParentSpanId
            - X-Span-Name
            - X-Process-Id
          zkNodes: "${ZOOKEEPER_HOST:120.211.316.261:2181,120.211.317.252:2181}"
        bindings:
          feed_platform_events_input:
            consumer:
              autoCommitOffset: false
      binders:
        xyzkafka:
          type: kafka
      bindings:
        feed_platform_events_input:
          binder: xyzkafka
          destination: platform-events
          group: br-platform-events
          consumer:
            maxAttempts: 2147483647
            backOffInitialInterval: 1000
            backOffMaxInterval: 300000
            backOffMultiplier: 2.0
The EventConsumer remains the same as my initial implementation, except for rethrowing the error so that the container knows the processing has failed. If you just catch it, the container has no way of knowing that message processing failed. By doing acknowledgement.acknowledge() you are only controlling the offset commit; in order for the retry to happen you must throw the exception. Don't forget to set the Kafka client auto-commit property and the Spring (container level) autoCommitOffset property to false. That's it.
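In other words, the only change to my listener is the catch block, roughly like this (just a sketch of that change, not the full class):
try {
    ClientResponse response = service.sendPayload(sysMessage);
    if (response.hasFault()) {
        Fault faultDetails = response.getFaultDetails();
        log.error(FAULT_MESSAGE, faultDetails.getCode(), faultDetails.getDescription());
    } else {
        log.info(SUCCESS_MESSAGE);
    }
    acknowledgment.acknowledge();
} catch (Exception e) {
    log.error(CONNECT_ERROR_MESSAGE, e.getMessage());
    // Rethrow so the binder sees the failure and retries (up to maxAttempts)
    // instead of silently moving on to the next record.
    throw new RuntimeException(e);
}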
As explained by Marius, Kafka only maintains an offset in the log. If you process the next message and update the offset, the failed message is lost.
You can send the failed message to a dead-letter topic (set enableDlq to true).
Recent versions of Spring Kafka (2.1.x) have special error handlers: ContainerStoppingErrorHandler, which stops the container when an exception occurs, and SeekToCurrentErrorHandler, which causes the failed message to be redelivered.
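For example, a sketch of wiring in SeekToCurrentErrorHandler, assuming spring-kafka 2.2 or later where the listener container factory exposes setErrorHandler (the bean and generic types below are the usual boilerplate, not something specific to this question):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;

@EnableKafka
@Configuration
public class KafkaErrorHandlingConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Re-seek to the failed record so it is redelivered instead of being skipped.
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        return factory;
    }
}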

Realm: Notification after initial sync

According to the docs Realm can notify you when certain actions are taking place like "every time a write transaction is committed". I am using the Realm Object Server and the first time a user opens my app a large set of data is synched from the server down to the app. I would like to show a loading screen and not present the main UI of my app until Realm has completed its initial sync. Is there a way to be notified / determine when this process is complete?
The realm.io website just posted documentation on how to do this.
Asynchronously Opening Realms
If opening a Realm might require a time-consuming operation, such as applying migrations or downloading the remote contents of a synchronized Realm, you should use the openAsync API to perform all work needed to get the Realm to a usable state on a background thread before dispatching to the given queue. You should also use openAsync with Realms that are set read-only.
For example:
Realm.openAsync({
  schema: [PersonSchema],
  schemaVersion: 42,
  migration: function(oldRealm, newRealm) {
    // perform migration (see "Migrations" in docs)
  }
}, (error, realm) => {
  if (error) {
    return;
  }
  // do things with the realm object returned by openAsync to the callback
  console.log(realm);
})
The openAsync command takes a configuration object as its first parameter and a callback as its second; the callback function receives an error (if any) and the opened Realm.
Initial Downloads
In some cases, you might not want to open a Realm until it has all remote data available. In such a case, use openAsync. When used with a synchronized Realm, this will download all of the Realm’s contents before the callback is invoked.
var carRealm;
Realm.openAsync({
  schema: [CarSchema],
  sync: {
    user: user,
    url: 'realm://object-server-url:9080/~/cars'
  }
}, (error, realm) => {
  if (error) {
    return;
  }
  // Realm is now downloaded and ready for use
  carRealm = realm;
});

Apache HTTP client 4.3 credentials per request

I have been having a look to a digest authentication example at:
http://hc.apache.org/httpcomponents-client-4.3.x/examples.html
In my scenario there are several threads issuing HTTP requests, and each of them has to be authenticated with its own set of credentials. Additionally, please consider that this question is probably very specific to Apache HTTP client 4.3 onwards; 4.2 probably handles authentication in a different way, although I didn't check it myself. That said, here goes the actual question.
I want to use just one client instance (a static member of the class, since it is thread-safe) and give it a connection manager to support several concurrent requests. The point is that each request will provide different credentials, and I don't see a way to assign credentials per request, since the credentials provider is set when building the HTTP client. From the link above:
[...]
HttpHost targetHost = new HttpHost("localhost", 80, "http");
CredentialsProvider credsProvider = new BasicCredentialsProvider();
credsProvider.setCredentials(
        new AuthScope(targetHost.getHostName(), targetHost.getPort()),
        new UsernamePasswordCredentials("username", "password"));
CloseableHttpClient httpclient = HttpClients.custom()
        .setDefaultCredentialsProvider(credsProvider).build();
[...]
Checking:
http://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html#d5e600
The code sample in section 4.4 ("HTTP authentication and execution context") seems to say that the HttpClientContext is given the auth cache and the credentials provider, and is then passed to the HTTP request. After that the request is executed, and it seems the client will look up credentials by the host of the HTTP request. In other words: if the context (or the cache) has valid credentials for the target host of the current HTTP request, it will use them. The problem for me is that different threads will perform different requests to the same host.
Is there any way to provide custom credentials per HTTP request?
Thanks in advance for your time! :)
The problem for me is that different threads will perform different requests to the same host.
Why should this be a problem? As long as you use a different HttpContext instance per thread, the execution contexts of those threads are going to be completely independent:
CloseableHttpClient httpclient = HttpClients.createDefault();

CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("user:pass"));

HttpClientContext localContext = HttpClientContext.create();
localContext.setCredentialsProvider(credentialsProvider);

HttpGet httpget = new HttpGet("http://localhost/");
CloseableHttpResponse response = httpclient.execute(httpget, localContext);
try {
    EntityUtils.consume(response.getEntity());
} finally {
    response.close();
}
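If it helps, here is a self-contained sketch of the same idea: one shared client plus a fresh context carrying per-request credentials. The class name, user names and URL are placeholders, not something from the original question.
import java.io.IOException;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.protocol.HttpClientContext;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.apache.http.util.EntityUtils;

public class PerRequestAuthExample {

    // One shared, thread-safe client backed by a pooling connection manager.
    private static final CloseableHttpClient CLIENT = HttpClients.custom()
            .setConnectionManager(new PoolingHttpClientConnectionManager())
            .build();

    // Each call builds its own context, so credentials never leak between requests.
    static String getAs(String user, String password, String url) throws IOException {
        BasicCredentialsProvider credsProvider = new BasicCredentialsProvider();
        credsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(user, password));

        HttpClientContext context = HttpClientContext.create();
        context.setCredentialsProvider(credsProvider);

        try (CloseableHttpResponse response = CLIENT.execute(new HttpGet(url), context)) {
            return EntityUtils.toString(response.getEntity());
        }
    }
}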
I have a similar issue.
I must call a service n times with a single system user, authenticated with NTLM, and I want to do this using multiple threads.
What I came up with is creating a single HttpClient with no default credentials provider. When a request needs to be performed, I use a CredentialsProviderFactory injected into the method performing the request (in a specific thread). From it I get a brand new CredentialsProvider and put it into a context (created in the thread).
Then I call the execute method on the client using the overload execute(method, context).
class MilestoneBarClient implements IMilestoneBarClient {

    private static final Logger log = LoggerFactory.getLogger(MilestoneBarClient.class);

    private MilestoneBarBuilder builder;
    private CloseableHttpClient httpclient;
    private MilestoneBarUriBuilder uriBuilder;
    private ICredentialsProviderFactory credsProviderFactory;

    MilestoneBarClient(CloseableHttpClient client, ICredentialsProviderFactory credsProviderFactory, MilestoneBarUriBuilder uriBuilder) {
        this(client, credsProviderFactory, uriBuilder, new MilestoneBarBuilder());
    }

    MilestoneBarClient(CloseableHttpClient client, ICredentialsProviderFactory credsProviderFactory, MilestoneBarUriBuilder uriBuilder, MilestoneBarBuilder milestoneBarBuilder) {
        this.credsProviderFactory = credsProviderFactory;
        this.uriBuilder = uriBuilder;
        this.builder = milestoneBarBuilder;
        this.httpclient = client;
    }

    // This method is called by multiple threads
    @Override
    public MilestoneBar get(String npdNumber) {
        log.debug("Asking milestone bar info for {}", npdNumber);
        try {
            String url = uriBuilder.getPathFor(npdNumber);
            log.debug("Building request for URL {}", url);
            HttpClientContext localContext = HttpClientContext.create();
            localContext.setCredentialsProvider(credsProviderFactory.create());
            HttpGet httpGet = new HttpGet(url);
            long start = System.currentTimeMillis();
            try (CloseableHttpResponse resp = httpclient.execute(httpGet, localContext)) {
[...]
For some reason I sometimes get an error, but I guess it's an NTLMCredentials issue (not being thread-safe...).
In your case, you could probably pass the factory to the get method instead of passing it at creation time.
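For completeness, a hypothetical ICredentialsProviderFactory along the lines of the one injected above, assuming the interface exposes a single create() method; it builds a fresh provider (and fresh NTCredentials) on every call so nothing is shared between threads.
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.NTCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;

public class NtlmCredentialsProviderFactory implements ICredentialsProviderFactory {

    private final String user;
    private final String password;
    private final String workstation;
    private final String domain;

    public NtlmCredentialsProviderFactory(String user, String password,
                                          String workstation, String domain) {
        this.user = user;
        this.password = password;
        this.workstation = workstation;
        this.domain = domain;
    }

    @Override
    public CredentialsProvider create() {
        // A brand new provider and credentials object per call, never reused across threads.
        CredentialsProvider provider = new BasicCredentialsProvider();
        provider.setCredentials(AuthScope.ANY,
                new NTCredentials(user, password, workstation, domain));
        return provider;
    }
}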
