How to get a transaction history by a specific transaction ID (txHash) in Corda

To get states I can use the vault, but what about transactions? How can I get them, for example, by txHash? Is it possible to do this with vaultService.queryBy(criteria)?
Since the internalVerifiedTransactionsSnapshot method is now deprecated, is there any way to retrieve a specific transaction by its txHash as of Corda 4?

Inside the node you can call:
serviceHub.validatedTransactions.getTransaction(hash)
Via RPC, I think you can do this:
proxy.stateMachineRecordedTransactionMappingSnapshot().map { it.transactionId }.first { it == hash }
But a better solution would be to create a flow that takes in a hash, calls the first snippet and returns the transaction.
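For example, a minimal sketch of such a flow (assuming Corda 4's Java flow API; the flow name is made up for illustration):
import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.crypto.SecureHash;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.StartableByRPC;
import net.corda.core.transactions.SignedTransaction;

// Hypothetical flow that looks up a transaction in the node's transaction storage by its hash.
@StartableByRPC
public class GetTransactionByHashFlow extends FlowLogic<SignedTransaction> {
    private final SecureHash txHash;

    public GetTransactionByHashFlow(SecureHash txHash) {
        this.txHash = txHash;
    }

    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        // Same lookup as the first snippet, exposed so it can be started over RPC.
        SignedTransaction stx = getServiceHub().getValidatedTransactions().getTransaction(txHash);
        if (stx == null) {
            throw new FlowException("No transaction found for hash " + txHash);
        }
        return stx;
    }
}
Over RPC you could then start it with something like proxy.startFlowDynamic(GetTransactionByHashFlow.class, SecureHash.parse(hashString)).getReturnValue().get().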

Related

Datastore: Saving entity with successors in the same transaction with autogenerated key Ids

I'd like to run the following algorithm (it's more like JavaScript pseudocode):
const transaction = datastore.transaction();
await transaction.run();
const parentKey = createKey(namespace, kind); // note that I leave the ID to be generated
await transaction.save(parentKey, parentEntity);
const childKey = createKey(namespace, kind, parentId, parentKind); // ??? parentId is not known yet
await transaction.save(childKey, childEntity);
await transaction.commit();
How can I know the parentId, since the initial save of parentEntity has not yet been committed?
I'd like to run this in a single transaction; is this achievable?
No, this is not possible due to the datastore's transaction isolation and consistency (emphasis mine):
This consistent snapshot view also extends to reads after writes inside transactions. Unlike with most databases, queries and gets inside a Cloud Datastore transaction do not see the results of previous writes inside that transaction. Specifically, if an entity is modified or deleted within a transaction, a query or lookup returns the original version of the entity as of the beginning of the transaction, or nothing if the entity did not exist then.
Depending on why you actually need such a sequence to be done transactionally, you might be able to achieve something roughly equivalent this way (a sketch follows below):
- create the parent transactionally
- in the same transaction, also create and transactionally enqueue a push task, passing it the parent's key as a parameter; the task will be enqueued only if/when the transaction succeeds
- in the task handler (also made transactional), create the child entity, which is guaranteed to only happen once
Note that not all GAE environments support such a scheme due to limited push task queue support.
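If you are on the App Engine standard environment, a rough sketch of that pattern using the Java low-level Datastore and task queue APIs might look like this (the kind name and handler URL are made up for illustration):
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.datastore.Transaction;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class CreateParentAndEnqueueChild {
    public void run() {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Transaction txn = ds.beginTransaction();
        try {
            // 1. Create the parent; put() allocates and returns its key with the generated ID.
            Entity parent = new Entity("Parent");
            Key parentKey = ds.put(txn, parent);

            // 2. Transactionally enqueue a push task carrying the parent's key;
            //    the task is only enqueued if the surrounding transaction commits.
            Queue queue = QueueFactory.getDefaultQueue();
            queue.add(txn, TaskOptions.Builder
                    .withUrl("/tasks/create-child") // hypothetical handler URL
                    .param("parentKey", KeyFactory.keyToString(parentKey)));

            txn.commit();
        } finally {
            if (txn.isActive()) {
                txn.rollback();
            }
        }
    }
}
The /tasks/create-child handler would then call KeyFactory.stringToKey(...) on the parameter and create the child entity in its own transaction.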

Does Speedment support transactions?

I have implemented the persistence layer using Speedment and I would like to test the code using Spring Boot unit tests. I have annotated my unit tests with the following annotations:
@RunWith(SpringRunner.class)
@SpringBootTest
@Transactional
public class MovieServiceTest {
    ...
}
By default, Spring will start a new transaction surrounding each test method and its @Before/@After callbacks, rolling the transaction back at the end. With Speedment, however, this does not seem to work.
Does Speedment support transactions across several invocations, and if yes, how do I have to configure Spring to use the Speedment transactions, or how do I have to configure Speedment to use the data source provided by Spring?
Transaction support was added in Speedment 3.0.17. However, it does not integrate with Spring's @Transactional annotation yet, so you will have to wrap the code you want to execute as a single transaction, as shown here:
txHandler.createAndAccept(tx -> {
    Account sender = accounts.stream()
        .filter(Account.ID.equal(1))
        .findAny()
        .get();
    Account receiver = accounts.stream()
        .filter(Account.ID.equal(2))
        .findAny()
        .get();
    accounts.update(sender.setBalance(sender.getBalance() - 100));
    accounts.update(receiver.setBalance(receiver.getBalance() + 100));
    tx.commit();
});
It is likely that you are streaming over a table and then conducting an update/remove operation while the stream is still open. Most databases cannot handle having an open ResultSet on a Connection and then performing update operations on the same connection.
Luckily, there is an easy workaround: collect the entities you would like to modify into an intermediate Collection (such as a List or Set) and then use that Collection to perform the desired operations.
This case is described in the Speedment User's Guide here:
txHandler.createAndAccept(tx -> {
    // Collect to a list before performing actions
    List<Language> toDelete = languages.stream()
        .filter(Language.LANGUAGE_ID.notEqual((short) 1))
        .collect(toList());
    // Do the actual actions
    toDelete.forEach(languages.remover());
    tx.commit();
});
AFAIK it does not (yet). Correction: it seems to set up one transaction per stream/statement.
See this article: https://dzone.com/articles/best-java-orm-frameworks-for-postgresql
But it should be possible to implement with writing a custom extension: https://github.com/speedment/speedment/wiki/Tutorial:-Writing-your-own-extensions
Edit:
According to a Speedment developer, one stream maps to one transaction: https://www.slideshare.net/Hazelcast/webinar-20150305-speedment-2

How to get Contract State from vaultService of MockNode in Corda M12.1?

I have created a MockNetwork and MockNodes for testing the CorDapp.
Then I successfully executed the flows with states, which stored the states on the ledger.
I'm able to fetch the previously stored states using:
mockNode1.rpcOps.vaultAndUpdates().first
.filterStatesOfType<SsiState>()
But I'm unable to fetch the same states using the vaultService of mockNode1:
mockNode1.services.vaultService.track().first.states
or
mockNode1.vault.track().first.states
What could be the cause?
The solution would be to rebase to Corda M13. In M12.1, the new vault query interface (query(), track()) was only partially implemented, which is why it is not behaving as expected.
Alternatively, if you wish to remain on M12.1 you can use mockNode1.services.vaultService.states() instead. It is worth noting that this method will be deprecated going forward in favour of the new interface which you initially tried to use and which is defined here: https://docs.corda.net/api-vault.html

Oracle coherence: is there a way to force the invocation of an agent on a specific node?

I have a replicated cluster composed of several nodes (up to 30), each running a single Java process that accesses the Coherence cache, and I use the map.invoke(key, agent) method for both creation and update of cache entries. The creation and the update are performed by setting the value in the process method.
Example (agent is an instance of a ConcreteEntryProcessor implementing the EntryProcessor interface):
map.invoke(key, agent);
which invokes the following code of the agent object:
public Object process(Entry entry) {
    if (entry.isPresent())
    {
        // UPDATE
        // ... some stuff which computes the new entry value ...
        entry.setValue(newValue, true);
        return newValue;
    }
    else
    {
        // CREATION
        // ... other stuff to determine the value ...
        entry.setValue(value, true);
        return value;
    }
}
I noticed that if the update is made by the node that created the agent, I get good performance; otherwise, there is a performance decrease if the update is made from a different node. It seems that there is some kind of ownership of data.
Is there a way to force the execution of an agent on the local node or change the ownership of data?
It all depends on the cache configuration. If you use a distributed (partitioned) cache, then indeed there is a kind of data ownership. In that case, the entry processor is invoked on the node that owns the given key.
As for your performance issues, I see two possibilities:
- The performance of map.invoke(key, agent) decreases, but the performance of EntryProcessor.process(entry) is stable. In that case your performance issue is probably caused by the serialization and network traffic needed to send the result of the processing back to the node that called map.invoke(key, agent). If you don't need this value on that node, simply return null from your entry processor (see the sketch below).
- The performance of EntryProcessor.process(entry) decreases. In that case maybe your create/update logic needs some data from the node that called map.invoke(key, agent). That is again a serialization/network traffic issue, but without knowing the details of your particular logic it is hard to suggest a solution.
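To illustrate the first point, the processor above could avoid shipping the computed value back to the caller. This is only a sketch: the compute... helpers stand in for your own update/creation logic, and a real processor would also need to be serializable (or POF-enabled) so it can be sent to the owning node.
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Variant of the processor above that does not send the computed value back to the caller.
public class ConcreteEntryProcessor extends AbstractProcessor {
    @Override
    public Object process(InvocableMap.Entry entry) {
        Object newValue;
        if (entry.isPresent()) {
            // UPDATE: compute the new value from the current one (hypothetical helper)
            newValue = computeUpdatedValue(entry.getValue());
        } else {
            // CREATION: determine the initial value (hypothetical helper)
            newValue = computeInitialValue();
        }
        entry.setValue(newValue, true);
        // Returning null avoids serializing the result and shipping it back
        // to the node that called map.invoke(key, agent).
        return null;
    }

    private Object computeUpdatedValue(Object current) { /* your update logic */ return current; }
    private Object computeInitialValue() { /* your creation logic */ return new Object(); }
}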

Best practices for using SQLite for a database queue

I am using an SQLite database for a producer-consumer queue.
One or more producers INSERT one row at a time, each with a new auto-incremented primary key.
There is one consumer (implemented in Java, using the sqlite-jdbc library), and I want it to read a batch of rows and then delete them. It seems like I need transactions to do this, but using SQLite with transactions doesn't seem to work right. Am I overthinking this?
If I do end up needing transactions, what's the right way to do this in Java?
Connection conn;
// assign here
boolean success = false;
try {
    // do stuff
    success = true;
} finally {
    if (success)
        conn.commit();
    else
        conn.rollback();
}
See this trail for an introduction to transaction handling with Java JDBC.
As for your use case, I think you should use transactions, especially if the consumer is complex. The tricky part is always deciding when a row has been consumed and when it should be considered again. For example, if an error occurs before the consumer can actually do its job, you'll want a rollback. But if the row contains illegal data (like text in a number field), then the rollback will turn into an infinite loop.
Normally, SQLite requires explicit (not implicit!) transactions for this, so you need something like "BEGIN TRANSACTION". Of course, it could be that your Java binding has this incorporated, but good bindings don't.
So you might want to add the necessary transaction start (there might be a specialized method in your binding).
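Putting both answers together, here is a rough sketch of the consumer's batch read-and-delete inside one JDBC transaction using sqlite-jdbc. The queue table, its id and payload columns, the database file name, and the batch size are all made up for illustration; with JDBC, disabling auto-commit is what starts the explicit transaction.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class QueueConsumer {
    public void consumeBatch() throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:queue.db")) {
            conn.setAutoCommit(false); // start an explicit transaction via JDBC
            try {
                List<Long> consumed = new ArrayList<>();
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT id, payload FROM queue ORDER BY id LIMIT 10")) {
                    while (rs.next()) {
                        // ... process rs.getString("payload") ...
                        consumed.add(rs.getLong("id"));
                    }
                }
                try (PreparedStatement del =
                             conn.prepareStatement("DELETE FROM queue WHERE id = ?")) {
                    for (long id : consumed) {
                        del.setLong(1, id);
                        del.addBatch();
                    }
                    del.executeBatch();
                }
                conn.commit(); // the batch leaves the queue only if processing succeeded
            } catch (SQLException | RuntimeException e) {
                conn.rollback(); // leave the rows in place so they are considered again
                throw e;
            }
        }
    }
}
As the first answer notes, rows with illegal data may need to be handled separately (for example, moved to a dead-letter table) rather than rolled back forever.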
