The AccountInfo state has a field called status which is initialized to ACTIVE, but the AccountInfoCommand class currently has only one command, Create. Should we use that if we want to write a flow that deactivates an account (i.e. updates it rather than creating it)? That doesn't feel right, since an update command should come with its own checks (for example, that there is exactly one input and one output with the same linearId, etc.).
Is there a reason why RequestAccountFlow was designed to return an AccountInfo as opposed to a StateAndRef? The latter makes it easier to request an AccountInfo and then use it as an input to a transaction (in my case, I would request an account, get its StateAndRef, clone it with the new status, use the StateAndRef as the input, and the clone with the new status as the output).
With the current Accounts implementation, the AccountInfo state no longer has a status field. https://github.com/corda/accounts/blob/master/contracts/src/main/kotlin/com/r3/corda/lib/accounts/contracts/states/AccountInfo.kt
And RequestAccountFlow was written to utilize ShareAccountInfoFlow, which returns an AccountInfo rather than a StateAndRef.
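That said, once the AccountInfo has been shared with the requesting node, the corresponding StateAndRef can be looked up locally. A minimal sketch (Java), assuming the accounts library's KeyManagementBackedAccountService and its accountInfo(UUID) lookup; the flow wiring around it is omitted:

// Inside a flow on the requesting node, after the AccountInfo has been shared:
// the state is now in our vault, so the account service can resolve its
// StateAndRef from the account's UUID (accountId is assumed to be in scope).
KeyManagementBackedAccountService accountService =
        getServiceHub().cordaService(KeyManagementBackedAccountService.class);
StateAndRef<AccountInfo> accountRef = accountService.accountInfo(accountId); // null if unknown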
I am working on building streaming APIs for client/server communication using Axon and ServerSentEvents, and I am not sure whether it is possible to stream and identify multiple different events using Axon's query update emitter and subscription queries.
I am using Axon's QueryUpdateEmitter.emit to emit events from a projection, based on different event types. The emitter is invoked in the projection, whereas the subscription query takes place in the REST API that is supposed to stream the server-sent events to the client.
For example,
I want to emit 3 different events for a use case that creates, updates, and deletes an entity.
I am wondering whether we can emit different types of data from different events but still combine them in one stream, i.e. send the actual object on entity create and update, but, since I don't have any entity/data to emit in the delete case, should I just send a simple message for delete?
I also want a way to specify the type of event while emitting, so that when the ServerSentEvent is built from the subscription query, I can specify the type/action (for example, to differentiate between a create and an update event) along with the data.
The main idea is to emit different events and combine them in one stream, even though the events may not all return exactly the same data (create and update vs. delete), as part of one subscription query, and to be able to accurately identify each event and tag the stream of ServerSentEvents with the appropriate event type.
Any ideas on how I can achieve this?
Here's how I am emitting an event upon creation using QueryUpdateEmitter:
@EventHandler
public void on(LibraryCreatedEvent event, @Timestamp Instant timestamp) {
    final LibrarySummaryEntity librarySummary = mapper.createdEventToLibrarySummaryEntity(event, timestamp);
    repository.save(librarySummary);
    log.debug("On {}: Saved the first summary of the library named {}", event.getClass().getSimpleName(), event.getName());
    queryUpdateEmitter.emit(
            AllLibrarySummariesQuery.class,
            query -> true,
            librarySummary
    );
    log.debug("emitted library summary: {}", librarySummary.getId());
}
Since I need to distinguish between create and update, I tried using GenericSubscriptionQueryUpdateMessage.asUpdateMessage on the update event and attached some metadata to it, but I am not sure this is the right direction, since I don't know how to retrieve that metadata on the subscription-query side.
Map<String, String> map = new HashMap<>();
map.put("Book Updated", event.getLibraryId());
queryUpdateEmitter.emit(
        AllLibrarySummariesQuery.class,
        query -> true,
        GenericSubscriptionQueryUpdateMessage.asUpdateMessage(librarySummary).withMetaData(map)
);
Here's how I am creating the subscription query:
SubscriptionQueryResult<List<LibrarySummaryEntity>, LibrarySummaryEntity> result = queryGateway.subscriptionQuery(
        new AllLibrarySummariesQuery(),
        ResponseTypes.multipleInstancesOf(LibrarySummaryEntity.class),
        ResponseTypes.instanceOf(LibrarySummaryEntity.class)
);
And here's the part where I am building the server-sent event (.event is where I want to specify the type of event, i.e. create/update/delete, and send the applicable data accordingly):
Flux<ServerSentEvent<LibrarySummaryResponseDto>> sseStream = result.initialResult()
        .flatMapMany(Flux::fromIterable)
        .map(value -> mapper.libraryEntityToResponseDto(value))
        .concatWith((streamingTimeout == -1)
                ? result.updates().map(value -> mapper.libraryEntityToResponseDto(value))
                : result.updates().take(Duration.ofMinutes(streamingTimeout)).map(value -> mapper.libraryEntityToResponseDto(value)))
        .log()
        .map(created -> ServerSentEvent.<LibrarySummaryResponseDto>builder()
                .id(created.getId())
                .event("library creation")
                .data(created)
                .build())
        .doOnComplete(() -> log.info("streaming completed"))
        .doFinally(signal -> result.close());
As long as the object you return matches the expected type declared when making the subscription query, you should be good!
Note that this means you will have to define a response object that fits all your scenarios. Whether that response is something you emit as the update (through the QueryUpdateEmitter) or something you produce in a map operation on the result of the subscription query is a different question, though.
Ideally, you'd decouple your internal messages from what you send outward, as with SSE. To move toward a more specific solution, you could benefit from having a Flux as the response type. You can then simply attach any mapping operations to adjust the responses emitted by the QueryUpdateEmitter into your desired SSE format.
Concluding, the short answer is "yes, you can," as long as the emitted response object matches the expected update type declared when dispatching the subscription query on the QueryGateway.
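One concrete way to satisfy that: emit a single envelope type that carries both the action and an optional payload, so create, update, and delete all fit the same subscription query. A minimal sketch; the LibrarySummaryUpdate class and its field names are illustrative, not part of Axon:

// Hypothetical envelope type: one update class covers create, update, and delete.
public class LibrarySummaryUpdate {
    private final String action;                 // e.g. "CREATE", "UPDATE", "DELETE"
    private final String libraryId;
    private final LibrarySummaryEntity summary;  // null for DELETE

    public LibrarySummaryUpdate(String action, String libraryId, LibrarySummaryEntity summary) {
        this.action = action;
        this.libraryId = libraryId;
        this.summary = summary;
    }

    public String getAction() { return action; }
    public String getLibraryId() { return libraryId; }
    public LibrarySummaryEntity getSummary() { return summary; }
}

// In the projection, emit the envelope instead of the bare entity:
queryUpdateEmitter.emit(
        AllLibrarySummariesQuery.class,
        query -> true,
        new LibrarySummaryUpdate("DELETE", event.getLibraryId(), null)
);

// When subscribing, declare the envelope as the update type:
SubscriptionQueryResult<List<LibrarySummaryEntity>, LibrarySummaryUpdate> result =
        queryGateway.subscriptionQuery(
                new AllLibrarySummariesQuery(),
                ResponseTypes.multipleInstancesOf(LibrarySummaryEntity.class),
                ResponseTypes.instanceOf(LibrarySummaryUpdate.class)
        );

// ...and use the action when building each ServerSentEvent, e.g.
// .event(update.getAction()).data(update.getSummary())

The initial result can keep returning the plain entities; only the update response type changes, and the SSE builder can then label each event from the envelope's action field.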
What would a Cosmos stored procedure look like that would set the PumperID field for every record to a default value?
We need to do this to repair some data, so the procedure would visit every record that has a PumperID field (not all docs have one) and set it to a default value.
Assuming a one-time data-maintenance task, arguably the simplest solution is to create a single-purpose .NET Core console app and use the SDK to query for the items that require changes and perform the updates. I've used this approach to rename properties, for example. This works for any Cosmos database and doesn't require deploying any stored procedures.
Ideally, it is designed to be idempotent so it can be run multiple times, in case several passes are required to catch new data coming in. If the item count is large, you could optionally use the SDK operations to scale up throughput on start and scale back down when finished. For performance, run it close to the endpoint, on an Azure virtual machine or Function.
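A minimal sketch of that console-app approach, shown here with the Azure Cosmos Java SDK v4 rather than .NET (the shape is the same); the database and container names, the partition-key property pk, and the default value DEFAULT are all assumptions:

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class PumperIdRepair {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint(System.getenv("COSMOS_ENDPOINT"))
                .key(System.getenv("COSMOS_KEY"))
                .buildClient();
        CosmosContainer container = client.getDatabase("mydb").getContainer("mycontainer");

        // IS_DEFINED skips documents that don't have a PumperID field at all;
        // excluding the default value keeps the job idempotent across runs.
        String query = "SELECT * FROM c WHERE IS_DEFINED(c.PumperID) AND c.PumperID != 'DEFAULT'";
        container.queryItems(query, new CosmosQueryRequestOptions(), ObjectNode.class)
                .forEach(doc -> {
                    doc.put("PumperID", "DEFAULT"); // assumed default value
                    container.replaceItem(doc, doc.get("id").asText(),
                            new PartitionKey(doc.get("pk").asText()), // "pk" is an assumed partition key property
                            new CosmosItemRequestOptions());
                });
        client.close();
    }
}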
For scenarios where you want to iterate through every item in a container and update a property, the best way to accomplish this is to use the Change Feed Processor and run the operation in an Azure Function or on a VM. See Change Feed Processor to learn more, and examples to start with.
With Change Feed you will want to start reading from the beginning of the container. To do this, see Reading Change Feed from the beginning.
Then, within your delegate, you will read each item off the change feed, check its value, and call ReplaceItemAsync() to write it back if it needs to be updated.
static async Task HandleChangesAsync(IReadOnlyCollection<MyType> changes, CancellationToken cancellationToken)
{
    Console.WriteLine("Started handling changes...");
    foreach (MyType item in changes)
    {
        if (item.PumperID == null)
        {
            item.PumperID = "some value";
            // call ReplaceItemAsync() to write the updated item back, etc.
        }
    }
    Console.WriteLine("Finished handling changes.");
}
I'm trying to understand how Corda oracles work from an example on GitHub. It seems like in every example the oracle verification function checks the data in the command against the data in the output state. I don't understand why that should work, because we (the issuer node) control that data and put it in the command/output state.
// Our contract does not check that the Nth prime is correct. Instead, it checks that the
// information in the command and state match.
override fun verify(tx: LedgerTransaction) = requireThat {
    "There are no inputs" using (tx.inputs.isEmpty())
    val output = tx.outputsOfType<PrimeState>().single()
    val command = tx.commands.requireSingleCommand<Create>().value
    "The prime in the output does not match the prime in the command." using
            (command.n == output.n && command.nthPrime == output.nthPrime)
}
In this example the state gets the Nth prime number from the oracle, but after the state is issued the verification function doesn't rerun the generate-Nth-prime function to make sure that this number is really the one we asked for. I understand that the data in this example is deterministic, since the Nth prime cannot change, but what about cases where we have dynamic data, like stock values? Shouldn't the oracle verification function also send another HTTP request and get the current values to check against?
Firstly, note that contracts in Corda are not able to access the outside world in any way (DB reads, HTTP requests, etc.). If they could, transaction validity would be non-deterministic. A transaction that is found to be valid on day n may become invalid on day n+1 (because a database row changed, or a website went down, etc.). This would cause disagreements about whether a given transaction was a valid ledger update.
However, we sometimes need a transaction to include external data for verification (whether a company is bankrupt, whether a natural catastrophe happened, etc.). To do this, we use a trusted oracle that only signs the transaction if a given piece of data is valid.
We could embed the information in the input or output states. However, this would require us to reveal the entire input or output state to the oracle for signing. For privacy reasons, it is therefore preferable to embed the data in a command that only contains the data of interest to the oracle, so that we can filter out all the other parts of the transaction and only present this command to the oracle for signing.
The oracle will usually perform a DB read or make an HTTP request to check the validity of the data before signing.
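As a rough sketch of that filtering step (Java; stx, oracle, and the PrimeContract.Create command name follow the prime-number sample, the rest is assumed):

// Inside the flow, with stx the SignedTransaction awaiting the oracle's
// signature and oracle the oracle Party: build a FilteredTransaction that
// reveals only the Create command the oracle must sign.
FilteredTransaction ftx = stx.buildFilteredTransaction(o ->
        o instanceof Command
                && ((Command<?>) o).getSigners().contains(oracle.getOwningKey())
                && ((Command<?>) o).getValue() instanceof PrimeContract.Create
);
// Only this command is revealed to the oracle; it re-derives the Nth prime
// (or, for dynamic data such as stock prices, re-queries its source) and
// signs over the filtered transaction only if the data checks out.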
I have a node on Firebase that lists all the players in the game. This list updates as and when new players join, and when the current user (me) disconnects, I would like to remove myself from the list.
As the list will change over time, at the moment I disconnect I would like to update this list and write the update to Firebase.
This is the way I am thinking of doing it, but it doesn't work, since .update doesn't accept a function, only an object. And if I create the object beforehand, by the time onDisconnect fires it will no longer be the latest object... How should I go about doing this?
payload.onDisconnect().update( () => {
    const withoutMe = state.roomObj
    const index = withoutMe.players.indexOf( state.userObj.name )
    if ( index > -1 ) {
        withoutMe.players.splice( index, 1 )
    }
    return withoutMe
})
The onDisconnect handler was made for this use case, but it requires that the data of the write operation is known at the time you set the onDisconnect. If you think about it, this should make sense: since the onDisconnect write happens after your client is disconnected, the data of that write operation must be known before the disconnect.
It sounds like you're building a so-called presence system: a list that contains a node for each user who is currently online. The Firebase documentation has an example of such a presence system. The key difference from your approach is that in the documentation each user only modifies their own node.
So: when the user comes online, they write a node for themselves. And then when they get disconnected, that node gets removed. Since all users write their node under the same parent, that parent will reflect the users that are online.
The actual implementation is a bit more involved since it deals with some edge cases too. So I recommend you check out the code in the documentation I linked, and use that as the basis for your own similar system.
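For a feel of the shape of that pattern, here is a rough sketch using the Firebase Android (Java) SDK; the web SDK's onDisconnect()/set() calls are equivalent, and the players path and playerName variable are assumptions:

// Each client manages only its own node under /players. When the SDK reports
// a (re)connection, arm the onDisconnect removal, then mark ourselves online.
DatabaseReference myNode = FirebaseDatabase.getInstance()
        .getReference("players/" + playerName); // "players" path is assumed
DatabaseReference connectedRef = FirebaseDatabase.getInstance()
        .getReference(".info/connected");

connectedRef.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot snapshot) {
        if (Boolean.TRUE.equals(snapshot.getValue(Boolean.class))) {
            // The server-side removal runs even if the client drops abruptly.
            myNode.onDisconnect().removeValue();
            myNode.setValue(true);
        }
    }

    @Override
    public void onCancelled(DatabaseError error) { }
});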
I want to make a purchase order manager, where a queue is created from a database and then assembled into an accordion. The user can then look at the requests and check one off when it is done; the task then moves to a "completed purchases" list.
I've been using a "notPurchased" datastore with the following server script:
query.filters.purchased._equals = false;
return query.run();
And then, when the "submit" button is pressed, I call datastore.load();. However, this doesn't seem to refresh the purchase queue immediately; I have to completely refresh the page to see the purchase request move to 'completed'. How do I make this change instantaneous?
I figured out a solution that eliminated the lag. Instead of filtering the database with a query, I bound the 'visibility' property to the proper boolean flag. Now items move instantly!