GRPC END Contract=IPubSubPartitionManager Action=GetOrAddContextSelectorPartition ID=71eff709-3920-4139-a5c9-2b0bef6f5ba7 From=ipv4:10.0.0.56:35091 IsFault=True Duration(ms)=4932 Request { "contextSelector": "/debug/fabric--madari-madariservice-01-GrpcPublishSubscribeProber-8", "addIfNotExists": true, "clientCredential": { "credentialValue": "uswestcentral-prod.sdnpubsub.core.windows.net-client", "credentialRegex": { "pattern": "null string" }, "enablePropertyBasedAcls": false } } Response { "errorMsg": "Timed out waiting for Shared lock on key; id=49b61cd7-31ba-4c5d-b579-d068326e8a90#133028293161628388#urn:ContextSelectorMapping/dataStore#132077756302731635, timeout=4000ms, txn=133029757743026569, lockResourceNameHash=6572262935404555983; oldest txn with lock=133029757735370545 (mode Shared)\r\n" }
This problem mostly affects read operations that support Repeatable Read: the caller may request an Update lock rather than a Shared lock. The Update lock is an asymmetric lock used to prevent a common form of deadlock that occurs when several transactions lock the same resource for a potential later update.
Avoid using TimeSpan.MaxValue for time-outs; a finite time-out allows deadlocks to be detected.
Don't create a transaction within another transaction's using statement. For example: two transactions (T1 and T2) attempt to read from and update K1, respectively. Because they both end up holding the Shared lock on K1, it is possible for them to deadlock when either tries to upgrade to an exclusive lock. In this situation, one or both of the operations will time out.
To avoid a frequent type of deadlock that arises, the default transaction timeout may need to be increased. It is usually 4 seconds; try a different value only after careful consideration.
Keep transactions short-lived; the longer they stay open, the longer they block other operations waiting in the lock queue.
For Reference: Azure Service Fabric
Recently I came across a problem: when a flow ends on the initiator's node and I immediately query the output state of the transaction in the vaults of all transaction participant nodes, the state is present only on the initiator's node, and only after a while does it appear in the vaults of the other participants' nodes.
Reading the documentation here, it says "Send the transaction to the counterparty for recording", but it does not say that it will wait until the counterparty has successfully recorded the transaction and its states in their vault, which sort of confirms that the issue I am facing is how Corda is implemented and not a bug.
On the other hand, it seems not very logical to end a flow without being sure that everything finished successfully on the counterparty node and all the states were written to their vaults.
I also looked into the code and reached this method in ServiceHubInternal, which seems to be responsible for recording states into the vault:
fun recordTransactions(statesToRecord: StatesToRecord,
                       txs: Collection<SignedTransaction>,
                       validatedTransactions: WritableTransactionStorage,
                       stateMachineRecordedTransactionMapping: StateMachineRecordedTransactionMappingStorage,
                       vaultService: VaultServiceInternal,
                       database: CordaPersistence) {
    database.transaction {
        require(txs.isNotEmpty()) { "No transactions passed in for recording" }
        val orderedTxs = topologicalSort(txs)
        val (recordedTransactions, previouslySeenTxs) = if (statesToRecord != StatesToRecord.ALL_VISIBLE) {
            orderedTxs.filter(validatedTransactions::addTransaction) to emptyList()
        } else {
            orderedTxs.partition(validatedTransactions::addTransaction)
        }
        val stateMachineRunId = FlowStateMachineImpl.currentStateMachine()?.id
        if (stateMachineRunId != null) {
            recordedTransactions.forEach {
                stateMachineRecordedTransactionMapping.addMapping(stateMachineRunId, it.id)
            }
        } else {
            log.warn("Transactions recorded from outside of a state machine")
        }
        vaultService.notifyAll(statesToRecord, recordedTransactions.map { it.coreTransaction }, previouslySeenTxs.map { it.coreTransaction })
    }
}
And it does not seem to me that this method is doing anything asynchronous, so I am really confused.
And so the actual question is:
Does the initiator flow in Corda actually wait until all the relevant states are recorded in the vaults of all participant nodes before it finishes, or does it finish right after it sends the states to the participant nodes for recording, without waiting for confirmation from their side that the states were recorded?
Edited
So, in case Corda by default does not wait for counterparty flows to store states in their vaults, but my implementation needs this behaviour anyway, would it be a good solution to implement the following:
At the very end of the initiator flow, before returning, call receiveAll in order to suspend and wait. Then, at the very end of each receiver flow, before returning, do a vault query with trackBy to wait until the state of interest is recorded in the vault, and once it is, call sendAll to notify the initiator's receiveAll. The initiator's flow would only finish when it has received confirmation from all receivers.
Would it be a normal approach to solve this problem? Can it have any drawbacks or side effects that you can think of?
Corda is able to handle your scenario; it is actually explained here under the Error handling behaviour section. Below is an excerpt, but I recommend reading the full section:
To recover from this scenario, the receiver’s finality handler will automatically be sent to the Flow Hospital where it’s suspended and retried from its last checkpoint upon node restart
The initiator flow is not responsible for the storage of the states in the responder's vault, so there is no storage confirmation from the responder: it has already checked the transaction and provided its signatures. From the initiator's point of view, it's all good once the transaction has been notarised and stored on its side; it is up to the responder to manage errors in its storage phase, as mentioned in the previous comment.
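That said, if you really do need the initiator to block until every responder has recorded the states, a lighter-weight variant of your trackBy idea is an explicit acknowledgement after finality: ReceiveFinalityFlow only returns once the transaction has been recorded on the responder's side, so the responder can simply send a confirmation back and the initiator can wait on all sessions. A minimal, untested sketch (Corda 4.x Java flow API; the flow names are hypothetical):

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.crypto.SecureHash;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.FlowSession;
import net.corda.core.flows.InitiatedBy;
import net.corda.core.flows.ReceiveFinalityFlow;
import net.corda.core.transactions.SignedTransaction;

// Responder: record the transaction, then acknowledge.
// (Any SignTransactionFlow handling would come before this, as usual.)
@InitiatedBy(MyInitiatorFlow.class) // hypothetical initiator flow
public class AckingResponderFlow extends FlowLogic<Void> {

    private final FlowSession counterpartySession;

    public AckingResponderFlow(FlowSession counterpartySession) {
        this.counterpartySession = counterpartySession;
    }

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // ReceiveFinalityFlow returns only after the transaction and its states
        // have been recorded in this node's vault.
        SignedTransaction stx = subFlow(new ReceiveFinalityFlow(counterpartySession));
        // Explicit acknowledgement back to the initiator.
        counterpartySession.send(stx.getId());
        return null;
    }
}

// Initiator side, after subFlow(new FinalityFlow(stx, sessions)):
//     for (FlowSession session : sessions) {
//         session.receive(SecureHash.class).unwrap(id -> id); // blocks until that responder has recorded
//     }

The obvious drawback is that the initiator's flow now hangs (and eventually errors) if any responder fails after notarisation, so you trade latency and failure coupling for the stronger guarantee.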
Background: We have been getting ProducerFencedException in our producer-only transactions, and want to introduce uniqueness to our prefix to prevent this issue.
In this discussion, Gary mentions that in the case of read-process-write, the prefix must be the same in all instances and after each restart.
How to choose Kafka transaction id for several applications, hosted in Kubernetes?
While digging into this issue, I came to the realisation that we are sharing the same prefixId for both producer-only and read-process-write.
In our TopicPublisher class wrapping kafkaTemplate, we already have publish() and publishInTransaction() methods for the read-process-write and producer-only use cases respectively.
I am thinking of having two sets of kafkaTemplates/TransactionManagers/ProducerFactories: one with a fixed prefixId to be used by the publish() method, and one with a unique prefix to be used in publishInTransaction(), roughly as sketched below.
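Roughly, the wiring I have in mind looks like this (only a sketch to illustrate the idea; the bean names, bootstrap address and UUID-based suffix are assumptions on my part, not something we have validated):

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class KafkaTxConfig {

    private Map<String, Object> producerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    // Fixed prefix: used by publish() for read-process-write, so that after a restart
    // the same transactional.id fences zombie producers for the same group/topic/partition.
    @Bean
    public KafkaTemplate<String, String> readProcessWriteTemplate() {
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(producerProps());
        pf.setTransactionIdPrefix("my-app-tx-");
        return new KafkaTemplate<>(pf);
    }

    // Unique prefix per instance: used by publishInTransaction() for producer-only transactions.
    @Bean
    public KafkaTemplate<String, String> producerOnlyTemplate() {
        DefaultKafkaProducerFactory<String, String> pf = new DefaultKafkaProducerFactory<>(producerProps());
        pf.setTransactionIdPrefix("my-app-tx-" + UUID.randomUUID() + "-");
        return new KafkaTemplate<>(pf);
    }
}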
My question is:
Does the prefix for producer-only transactions need to be the same after a pod is restarted? Can we just append some UUID or the k8s pod ID? Someone mentioned there may be delays with aborting transactions.
Is there a clean way to detect if the TopicPublisher is being called from a KafkaListener, so we can have just 1 publish method that uses the correct kafkaTemplate as needed?
Actually, there is no issue using the same transactionIdPrefix, at least with recent versions.
The factory gets a txIdPrefix.
For read-process-write, we create (and cache) a producer with transactionalId:
private String zombieFenceTxIdSuffix(String topic, int partition) {
    return this.consumerGroupId + "." + topic + "." + partition;
}
which is suffixed onto the prefix.
For producer-only transactions, we create (and cache) a producer with the prefix and a simple numeric suffix.
In the upcoming 2.3 release, there is also an option to assign a producer to a thread so the same thread always uses the same transactional.id.
I believe it needs to be the same, unless you don't mind waiting for transaction.timeout.ms (default 1 minute).
The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction. If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with an InvalidTransactionTimeout error.
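If you do go with a non-stable id and can live with that window, note that the timeout is just a producer property (a sketch; props here is the usual producer configuration map and ProducerConfig is org.apache.kafka.clients.producer.ProducerConfig):

// Producer-side transaction.timeout.ms; it must not exceed the broker's transaction.max.timeout.ms.
props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 10000); // e.g. 10 seconds instead of the 1-minute default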
This is what we do in spring-integration-kafka
if (this.transactional
        && TransactionSynchronizationManager.getResource(this.kafkaTemplate.getProducerFactory()) == null) {
    sendFuture = this.kafkaTemplate.executeInTransaction(t -> {
        return t.send(producerRecord);
    });
}
else {
    sendFuture = this.kafkaTemplate.send(producerRecord);
}
You can also use String suffix = TransactionSupport.getTransactionIdSuffix();, which is what the factory uses when it is asked for a producer; if it is null, you are not running on a transactional consumer thread.
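So, for your second question, a single publish method could branch on that suffix (a sketch only, assuming the TopicPublisher from your question is given the appropriate transactional KafkaTemplate; it is not a drop-in implementation):

import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.TransactionSupport;

public class TopicPublisher {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public TopicPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publish(ProducerRecord<String, String> record) {
        if (TransactionSupport.getTransactionIdSuffix() != null) {
            // We are on a transactional consumer (listener) thread: the send participates
            // in the container-managed read-process-write transaction.
            this.kafkaTemplate.send(record);
        }
        else {
            // Producer-only: run a local transaction around the send.
            this.kafkaTemplate.executeInTransaction(t -> t.send(record));
        }
    }
}

If you end up with the two-factory arrangement from the question, the same check can be used to decide which template to call.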
I've spent a fair amount of time looking into the Realm database mechanics, and I can't figure out whether Realm uses row-level read locks under the hood for data selected during write transactions.
As a basic example, imagine the following "queue" logic
assume the queue has an arbitrary number of jobs (we'll say 5 jobs)
async getNextJob() {
  let nextJob = null;
  this.realm.write(() => {
    let jobs = this.realm.objects('Job')
      .filtered('active == FALSE')
      .sorted([['priority', true], ['created', false]]);
    if (jobs.length) {
      nextJob = jobs[0];
      nextJob.active = true;
    }
  });
  return nextJob;
}
If I call getNextJob() twice concurrently and row-level read blocking isn't occurring, there's a chance that both calls will return the same job object when we query for jobs.
Furthermore, if I have outside logic that relies on up-to-date data in read logic (i.e. job.active == false when it is actually true at the current time), I need the read to block until update transactions complete. MVCC reads returning stale data do not work in this situation.
If read locks are being set in write transactions, I could make sure I'm always reading the latest data like so:
let active = null;
this.realm.write(() => {
  const job = this.realm.pseudoQueryToGetJobByPrimaryKey();
  active = job.active;
});
// Assuming the above write transaction blocked the read until
// any concurrent updates touching the same job committed,
// the value for active can be trusted at this point in time.
if (active === false) {
  // code to start job here
}
So basically, TL;DR does Realm support SELECT FOR UPDATE?
PostgreSQL: https://www.postgresql.org/docs/9.1/static/explicit-locking.html
MySQL: https://dev.mysql.com/doc/refman/5.7/en/innodb-locking-reads.html
So basically, TL;DR does Realm support SELECT FOR UPDATE?
Well if I understand the question correctly, the answer is slightly trickier than that.
If there is no Realm Object Server involved, then realm.write(() => { ... }) disallows any other writes at the same time, and updates the Realm to its latest version when the transaction is opened.
If there is Realm Object Server involved, then I think this still stands locally, but the Realm Sync manages the updates from remote, in which case the conflict resolution rules apply for remote data changes.
Realm does not allow concurrent writes. There is at most one ongoing write transaction at any point in time. If the async getNextJob() function is called twice concurrently, one of the invocations will block on realm.write().
SELECT FOR UPDATE then works trivially, since there are no concurrent updates.
I have a replicated cluster composed of several nodes (up to 30), on which there is a single Java process accessing the Coherence cache, and I use the map.invoke(key, agent) method for both creation and update of entries. Both the creation and the update are performed by setting the value in the process method.
Example (agent is an instance of a ConcreteEntryProcessor implementing the EntryProcessor interface):
map.invoke(key, agent);
which invokes the following code of the agent object:
public Object process(Entry entry) {
    if (entry.isPresent())
    {
        // UPDATE
        // ... some stuff which computes the new entry value ...
        entry.setValue(newValue, true);
        return newValue;
    }
    else
    {
        // CREATION
        // ... other stuff to determine the value ...
        entry.setValue(value, true);
        return value;
    }
}
I noticed that if the update is made by the node that created the entry, I get good performance; otherwise there is a performance decrease when the update is made from a different node. It seems that there is a kind of ownership of the data.
Is there a way to force the execution of an agent on the local node or change the ownership of data?
It all depends on the cache configuration. If you use a distributed (partitioned) cache, then indeed there is a kind of data ownership: the entry processor is invoked on the node that owns the given key.
As for your performance issues, I see two possibilities:
Performance of map.invoke(key, agent) decreases, but performance of EntryProcessor.process(entry) is stable. In that case your performance issue is probably caused by the serialization and network traffic needed to send the result of processing back to the node that called map.invoke(key, agent). If you don't need this value on that node, simply return null from your entry processor (see the sketch after these two points).
Performance of EntryProcessor.process(entry) decreases. In that case maybe your create/update logic needs some data from the node that called map.invoke(key, agent). It is again a serialization/network traffic issue, but without knowing the details of your particular logic it is hard to suggest a solution.
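For the first case, here is a minimal sketch of what returning null looks like (the compute* helpers are placeholders for your own create/update logic from the question, not Coherence API):

import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Same create/update pattern as the processor in the question, but returning null so the
// computed value is not serialized and shipped back to the node that called map.invoke(key, agent).
public class UpsertProcessor extends AbstractProcessor {

    @Override
    public Object process(InvocableMap.Entry entry) {
        Object newValue = entry.isPresent()
                ? computeUpdatedValue(entry)   // placeholder for "some stuff which computes the new entry value"
                : computeInitialValue(entry);  // placeholder for "other stuff to determine the value"
        entry.setValue(newValue, true);
        return null; // nothing travels back over the wire to the invoking node
    }

    private Object computeUpdatedValue(InvocableMap.Entry entry) {
        return entry.getValue(); // placeholder
    }

    private Object computeInitialValue(InvocableMap.Entry entry) {
        return "initial-value"; // placeholder
    }
}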
I'm having problems getting a cache cluster to empty all of its cache data stores.
This cluster has 89 cache stores and takes more than 40 minutes to completely unload the data.
I'm using this function:
public void deleteAll() {
    try {
        getCache().clear();
    } catch (Exception e) {
        log.error("Error unloading cache {}", getCache().getCacheName(), e);
    }
}
The getCache() method retrieves a NamedCache from CacheFactory:
public NamedCache getCache() {
    if (cache == null) {
        cache = com.tangosol.net.CacheFactory.getCache(this.idCacheFisica);
    }
    return cache;
}
Has anyone found any other way to do this faster?
Thank you in advance,
It's strange that it would take so long, though to be honest it's unusual to call clear().
You could try destroying the cache with NamedCache.destroy or CacheFactory.destroy(NamedCache).
The only problem with this is that it invalidates any references there might be to the cache, which would need to be re-obtained with another call to CacheFactory.getCache.
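Something along these lines (a sketch only; idCacheFisica is the cache-name field from your getCache() method):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CacheReset {

    // Drops the whole cache instead of clearing it entry by entry, then re-obtains
    // a fresh reference, since the old one is invalidated by destroy().
    public static NamedCache reset(String idCacheFisica) {
        NamedCache cache = CacheFactory.getCache(idCacheFisica);
        cache.destroy();
        return CacheFactory.getCache(idCacheFisica);
    }
}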
Often, a "bulk operation" call like clear() will get transmitted all the way back to the backing maps as a bulk operation, but will at some point (somewhere inside the server code) become a iteration-based implementation, so that each entry being removed can be evaluated against any registered rules, triggers, event listeners, etc., and so that backups can be captured exactly. In other words, to guarantee "correctness" in a distributed environment, it slows down the operation and forces it to occur as some ordered sequence of sub-operations.
Some of the reasons why this would happen:
You've configured read-through, write-through and/or write-behind behavior for the cache;
You have near caches and/or continuous queries (which imply listeners);
You have indexes (which need to be updated as the data disappears);
etc.
I will ask for some suggestions on how to speed this up.
Update (14 July 2014) - Yes, it turns out that this is a known issue (i.e. that it could be optimized), but that to maintain both "correctness" and backwards compatibility, the optimizations (which would be significant changes) have been deferred, and the clear() is still done in an iterative manner on the back end.