I'm unable to create an index. My Gremlin code is as follows:
mgmt = graph.openManagement()
usernameProperty = mgmt.getPropertyKey('username')
usernameIndex = mgmt.buildIndex('byUsernameUnique', Vertex.class).addKey(usernameProperty).unique().buildCompositeIndex()
mgmt.setConsistency(usernameIndex, ConsistencyModifier.LOCK)
mgmt.commit()
Shortly after, I receive two errors:
18:04:57 ERROR com.thinkaurelius.titan.graphdb.database.management.ManagementLogger - Evicted [1#0a00009d2537-ip-10-0-0-1572] from cache but waiting too long for transactions to close. Stale transaction alert on: [standardtitantx[0x6549ce71]]
18:04:57 ERROR com.thinkaurelius.titan.graphdb.database.management.ManagementLogger - Evicted [1#0a00009d2537-ip-10-0-0-1572] from cache but waiting too long for transactions to close. Stale transaction alert on: [standardtitantx[0x2a2815cc], standardtitantx[0x025dc2c0]]
The status of the index is stuck at INSTALLED:
usernameIndex.getIndexStatus(usernameProperty)
==>INSTALLED
I read that a failed instance could cause the issue, but a check of running instances shows just one:
mgmt.getOpenInstances()
==>0a00009d3011-ip-10-0-0-1572(current)
I've also tried issuing a REGISTER_INDEX action, which also gets evicted from the transaction cache with a similar error message:
mgmt.updateIndex(usernameIndex, SchemaAction.REGISTER_INDEX).get()
mgmt.commit()
I've also tried restarting the server multiple times.
It seems like the registration process is simply timing out, causing an "eviction" from the transaction cache. I've waited 48 hours just to be sure it wasn't a slow process. Normal reads, writes, and associated commits to Titan do seem to be working correctly; I'm just not able to create this index. I'm stuck. Is there something else I can try? Is there a way to extend the timeout on that transaction?
I'm running Titan 1.0.0 using a DynamoDB backend (setup with the AWS provided CloudFormation template).
EDIT:
Here is the complete command I'm pasting into Gremlin, with the addition of the awaitGraphIndexStatus step suggested by @M-T-A:
mgmt = graph.openManagement();
usernameIndex = mgmt.getPropertyKey('usernameIndex');
mgmt.buildIndex('byUsername',Vertex.class).addKey(usernameIndex).unique().buildCompositeIndex();
// I have tried with and without a commit here: mgmt.commit();
mgmt.awaitGraphIndexStatus(graph, 'byUsername').status(SchemaStatus.REGISTERED).timeout(10, java.time.temporal.ChronoUnit.MINUTES).call();
This results in the following error:
java.lang.NullPointerException
at com.thinkaurelius.titan.graphdb.database.management.GraphIndexStatusWatcher.call(GraphIndexStatusWatcher.java:52)
at com.thinkaurelius.titan.graphdb.database.management.GraphIndexStatusWatcher.call(GraphIndexStatusWatcher.java:18)
at java_util_concurrent_Callable$call.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:110)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:114)
at groovysh_evaluate.run(groovysh_evaluate:3)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:215)
at org.codehaus.groovy.tools.shell.Interpreter.evaluate(Interpreter.groovy:69)
at org.codehaus.groovy.tools.shell.Groovysh.execute(Groovysh.groovy:185)
at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:119)
at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:94)
I will also note that the routine to disable and delete an index fails as well.
mgmt = graph.openManagement()
theIndex = mgmt.getGraphIndex('byUsername')
mgmt.updateIndex(theIndex, SchemaAction.DISABLE_INDEX).get()
mgmt.commit()
mgmt.awaitGraphIndexStatus(graph, 'byUsername').status(SchemaStatus.DISABLED).timeout(10, java.time.temporal.ChronoUnit.MINUTES).call();
m = graph.openManagement()
i = m.getGraphIndex('byUsername')
m.updateIndex(i, SchemaAction.REMOVE_INDEX).get()
m.commit()
19:26:26 ERROR com.thinkaurelius.titan.graphdb.database.management.ManagementLogger - Evicted [1#ac1f3fa810472-ip-172-31-63-1681] from cache but waiting too long for transactions to close. Stale transaction alert on: [standardtitantx[0x2314cd97], standardtitantx[0x39f8adc0], standardtitantx[0x09de1b85]]
EDIT 2:
Attempting to create a new property key and index within the same transaction DOES WORK! But does this mean that I can't create indexes on existing property keys?
graph.tx().rollback();
mgmt = graph.openManagement();
indexName = 'byUsernameTest2';
propertyKeyName = 'testPropertyName2';
propertyKey = mgmt.makePropertyKey(propertyKeyName).dataType(String.class).cardinality(Cardinality.SINGLE).make();
mgmt.buildIndex(indexName,Vertex.class).addKey(propertyKey).buildCompositeIndex();
mgmt.commit();
graph.tx().commit();
mgmt.awaitGraphIndexStatus(graph, indexName).status(SchemaStatus.REGISTERED).timeout(10, java.time.temporal.ChronoUnit.MINUTES).call();
mgmt.commit();
After a pause, this results in:
This management system instance has been closed
Trying to fetch the new index results in:
mgmt = graph.openManagement();
index = mgmt.getGraphIndex('byUsernameTest2');
propkey = mgmt.getPropertyKey('testPropertyName2');
index.getIndexStatus(propkey);
==>ENABLED
If you're running multiple Titan instances, you should be aware that they need to coordinate before the index will become available.
Furthermore, there are various subtleties around transaction management and under what circumstances transactions will be left open; I believe Titan 1.0.0 doesn't have the latest from TinkerPop in that regard. Have you tried creating the index immediately after boot?
Finally, the index creation process is different depending on whether or not the property keys being indexed have been used before. Are you attempting to index new keys, or existing ones?
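If a stale instance is what's blocking coordination, one thing worth trying (a minimal sketch, assuming graph is the open TitanGraph and that the non-current instances listed are genuinely dead) is to force-close every registered instance except the current one, then retry the index build:

// Sketch: evict stale Titan instances so schema changes can be acknowledged.
// getOpenInstances() suffixes the current instance with "(current)"; never close that one.
TitanManagement mgmt = graph.openManagement();
for (String instanceId : mgmt.getOpenInstances()) {
    if (!instanceId.contains("(current)")) {
        mgmt.forceCloseInstance(instanceId);
    }
}
mgmt.commit();

forceCloseInstance is only meant for instances that have actually failed; closing a live one can corrupt its view of the schema.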
You need to wait for Titan and DynamoDB to get the index registered. You can do so with:
ManagementSystem.awaitGraphIndexStatus(graph, propertyKeyIndexName)
.status(SchemaStatus.REGISTERED)
.timeout(10, java.time.temporal.ChronoUnit.MINUTES) // set timeout to 10 min
.call();
The default timeout is usually not long enough, so increase it to 10 minutes; that usually does the trick with the DynamoDB backend.
Only when the index is in the REGISTERED state can you perform a reindex. Once the reindex is done, you need to wait until it is ENABLED, by reusing the code sample above and changing the status to ENABLED.
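A sketch of that sequence, assuming the byUsername index name from the question (SchemaAction.REINDEX is the Titan action that rebuilds an index over existing data):

// Sketch: once the index is REGISTERED, reindex existing data, then wait for ENABLED.
TitanManagement mgmt = graph.openManagement();
mgmt.updateIndex(mgmt.getGraphIndex("byUsername"), SchemaAction.REINDEX).get();
mgmt.commit();

ManagementSystem.awaitGraphIndexStatus(graph, "byUsername")
        .status(SchemaStatus.ENABLED)
        .timeout(10, java.time.temporal.ChronoUnit.MINUTES)
        .call();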
For more info, see the docs.
Edit
Let me share the code that works for me on the BerkeleyDB and DynamoDB backends every time.
graph.tx().rollback(); // never create new indexes while a transaction is active
TitanManagement mgmt = graph.openManagement();
PropertyKey propertyKey = getOrCreatePropertyKeyIfNotExist(mgmt, propertyKeyName);
String indexName = makePropertyKeyIndexName(propertyKey);

if (mgmt.getGraphIndex(indexName) == null) {
    mgmt.buildIndex(indexName, Vertex.class).addKey(propertyKey).buildCompositeIndex();
    mgmt.commit(); // you MUST commit mgmt
    graph.tx().commit(); // and commit the transaction too
    ManagementSystem.awaitGraphIndexStatus(graph, indexName).status(SchemaStatus.REGISTERED).call();
} else { // already defined
    mgmt.rollback();
    graph.tx().rollback();
}

private static PropertyKey getOrCreatePropertyKeyIfNotExist(TitanManagement mgmt, String s) {
    PropertyKey key = mgmt.getPropertyKey(s);
    if (key != null)
        return key;
    else
        return mgmt.makePropertyKey(s).dataType(String.class).make();
}

private static String makePropertyKeyIndexName(PropertyKey pk) {
    return pk.name() + Tokens.INDEX_SUFFIX;
}
From the error I saw, it seems Titan couldn't get the index, which means you're waiting on an index that is not even defined. Have a look at the line that causes the error here.
Make sure you're passing the right index name to awaitGraphIndexStatus.
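A minimal guard along those lines (a sketch; the hardcoded byUsername name is taken from your edit):

// Sketch: verify the index actually exists before waiting on its status; this
// avoids the NPE inside GraphIndexStatusWatcher when the name resolves to nothing.
TitanManagement mgmt = graph.openManagement();
boolean indexExists = mgmt.getGraphIndex("byUsername") != null;
mgmt.rollback(); // read-only check, nothing to commit

if (!indexExists) {
    throw new IllegalStateException("index 'byUsername' is not defined; did the defining transaction commit?");
}
ManagementSystem.awaitGraphIndexStatus(graph, "byUsername")
        .status(SchemaStatus.REGISTERED)
        .timeout(10, java.time.temporal.ChronoUnit.MINUTES)
        .call();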
Related
I need to constantly increment values in a map field value in DynamoDB. The map will contain keys with counters, and for each update I want to atomically increment those keys. The corner case is that this operation needs to be an upsert: if the record/property doesn't exist it should be created, otherwise updated (i.e., incremented).
The given Java code below shows what I'm trying to achieve:
DynamoDbClient client = DynamoDbClient.builder()
        .credentialsProvider(
                StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("test-key", "test-secret")))
        .region(Region.EU_CENTRAL_1)
        .endpointOverride(URI.create("http://localhost:4566"))
        .build();

client.transactWriteItems(TransactWriteItemsRequest.builder()
        .transactItems(TransactWriteItem.builder()
                .update(Update.builder()
                        .tableName("my-table")
                        .key(Map.of("id", AttributeValue.builder().s("123").build()))
                        .updateExpression("""
                                SET itemId = if_not_exists(itemId, :itemId)
                                ADD #val.#country :value
                                """)
                        .expressionAttributeNames(Map.of(
                                "#val", "value",
                                "#country", "Portugal" // in other invocations this might be a different country
                        ))
                        .expressionAttributeValues(Map.of(
                                ":itemId", AttributeValue.builder().s("1234").build(),
                                ":value", AttributeValue.builder().n("1").build()))
                        .build())
                .build())
        .build());
Every time I run this operation I get the following error:
Exception in thread "main" software.amazon.awssdk.services.dynamodb.model.DynamoDbException: The document path provided in the update expression is invalid for update (Service: DynamoDb, Status Code: 400, Request ID: XXX)
Does somebody know how I could achieve this functionality?
All of my Firestore transactions fail when I want to get a document.
I've tried getting other documents and changing the security rules to be public. I've found that when I use an if check, it seems like the get call did return data.
val currentUserDocument = firebaseFirestore.collection("user").document(firebaseAuth.currentUser!!.uid)
val classMemberDocument = firebaseFirestore.collection("class").document(remoteClassID).collection("member").document(firebaseAuth.currentUser!!.uid)

firebaseFirestore.runTransaction { transaction ->
    val userSnapshot = transaction.get(currentUserDocument)
    val isInClass = userSnapshot.getBoolean("haveRemoteClass")!!
    val classID = userSnapshot.getString("remoteClassID")!!
    if (isInClass && classID == remoteClassID) {
        transaction.update(currentUserDocument, "haveRemoteClass", false)
        transaction.update(currentUserDocument, "remoteClassID", "")
        transaction.delete(classMemberDocument)
    } else {
        throw FirebaseFirestoreException("You aren't in this class!", FirebaseFirestoreException.Code.ABORTED)
    }
    null
}
This typically means that the data that you're using in the transaction is seeing a lot of contention.
Each time you run a transaction, Firebase determines the current state of all documents you use in the transaction, and sends that state and the new state of those documents to the server. If the documents that you got were changed between when the transaction started and when the server gets it, it rejects the transaction and the client retries.
For the client to fail like this, it has to retry more often than is reasonable. Consider reducing the scope of your transaction to cover fewer documents, or find another way to reduce contention (such as the approach outlined for distributed counters).
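For reference, the distributed-counter approach spreads writes across several shard documents and sums them on read, so no single document absorbs all the contention. A rough sketch in Java (the collection names, field name, and shard count are illustrative assumptions, not taken from your code):

import com.google.android.gms.tasks.OnSuccessListener;
import com.google.firebase.firestore.DocumentSnapshot;
import com.google.firebase.firestore.FieldValue;
import com.google.firebase.firestore.FirebaseFirestore;
import com.google.firebase.firestore.SetOptions;
import java.util.Collections;
import java.util.Random;

// Sketch of a sharded ("distributed") counter: each write touches one random
// shard document, and reads sum all shards. FieldValue.increment requires a
// reasonably recent Firestore SDK.
class ShardedCounter {
    private static final int NUM_SHARDS = 10;
    private final FirebaseFirestore db = FirebaseFirestore.getInstance();
    private final Random random = new Random();

    void increment() {
        String shardId = String.valueOf(random.nextInt(NUM_SHARDS));
        db.collection("counters").document("likes")
                .collection("shards").document(shardId)
                // set + merge creates the shard on first use; no transaction needed
                .set(Collections.singletonMap("count", FieldValue.increment(1)), SetOptions.merge());
    }

    void total(OnSuccessListener<Long> onTotal) {
        db.collection("counters").document("likes").collection("shards").get()
                .addOnSuccessListener(snapshot -> {
                    long sum = 0;
                    for (DocumentSnapshot shard : snapshot.getDocuments()) {
                        Long count = shard.getLong("count");
                        sum += (count == null) ? 0 : count;
                    }
                    onTotal.onSuccess(sum);
                });
    }
}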
I have some ContractState, and there are two commands that can 'delete' the state (mark it as historic, with no new state to replace it) - let's say 'Delete' and 'Revoke', which have different real-world consequences.
I can still see the historic states in the vault, right? How can I determine which command deleted the state? I suppose I could add some enum to the state: 'Active|Deleted|Revoked', and then move the state from S(Active) -> S(Deleted|Revoked) -> Historic. But that seems clunky.
In theory, you could determine which command was used to consume the state by checking the transaction that consumed the state.
However, there is currently no non-deprecated API for viewing the contents of the node's transaction storage as a node owner. This is because in a future version of Corda, we expect transaction resolution to occur within a secure guard extension (SGX). This will affect which transactions are visible, and Corda therefore cannot commit to a stable API for viewing the contents of the node's transaction storage.
If you're willing to use the deprecated internalVerifiedTransactionsSnapshot/internalVerifiedTransactionsFeed API, you could do something like:
val (_, vaultUpdates) = proxy.vaultTrackBy<ContractState>()
vaultUpdates.toBlocking().subscribe { update ->
    update.produced.forEach { stateAndRef ->
        val newStateTxId = stateAndRef.ref.txhash
        val transactions = proxy.internalVerifiedTransactionsSnapshot()
        val transaction = transactions.find { transaction -> transaction.id == newStateTxId }!!
        val commands = transaction.tx.commands
    }
}
An alternative would be to add a status field to the state, and update that field instead of marking the state as consumed when "deleting" it. For example:
class MyState(val expired: Boolean) : ContractState {
    override val participants: List<AbstractParty> = TODO()
}
We're using Firebase DB together with RxSwift and are running into problems with transactions. I don't think they're related to the combination with RxSwift, but that's our context.
I'm observing data in Firebase DB for any value changes:
let child = dbReference.child(uniqueId)
let dbObserverHandle = child.observe(.value, with: { snapshot -> () in
    guard snapshot.exists() else {
        log.error("empty snapshot - child not found in database")
        observer.onError(FirebaseDatabaseConsumerError(type: .notFound))
        return
    }
    //more checks
    ...
    //read the data into our object
    ...
    //finally send the object as Rx event
    observer.onNext(parsedObject)
}, withCancel: { _ in
    log.error("could not read from database")
    observer.onError(FirebaseDatabaseConsumerError(type: .databaseFailure))
})
No problems with this alone. Data is read and observed without any problems, and changes in the data are propagated as they should be.
Problems occur as soon as another part of the application modifies the observed data in a transaction:
dbReference.runTransactionBlock({ (currentData: FIRMutableData) -> FIRTransactionResult in
    log.debug("begin transaction to modify the observed data")
    guard var ourData = currentData.value as? [String : AnyObject] else {
        //data seems to be nil because it is not available yet; retry as shown in
        //the transaction example at https://firebase.google.com/docs/database/ios/read-and-write
        return TransactionResult.success(withValue: currentData)
    }
    ...
    //read and modify data during the transaction
    ...
    log.debug("complete transaction")
    return FIRTransactionResult.success(withValue: currentData)
}) { error, committed, _ in
    if committed {
        log.debug("transaction commited")
        observer(.completed)
    } else {
        let error = error ?? FirebaseDatabaseConsumerError(type: .databaseFailure)
        log.error("transaction failed - \(error)")
        observer(.error(error))
    }
}
The transaction receives nil data on the first try (which is something you should be able to handle). We just call
return TransactionResult.success(withValue: currentData)
in that case.
But this is propagated to the observer described above. The observer runs into the "empty snapshot - child not found in database" case because it receives an empty snapshot.
The transaction is run again, updates the data and commits successfully. And the observer receives another update with the updated data and everything is fine again.
My questions:
Is there any better way to handle the nil data during the transaction than writing it back to the database with FIRTransactionResult.success?
This seems to be the only way to complete the transaction run and trigger a re-run with fresh data, but maybe I'm missing something.
Why are we receiving the empty currentData at all? The data is obviously there, because it's being observed.
Transactions seem to be unusable with this behavior if they trigger a 'temporary delete' for all observers of that data.
Update
Gave up and restructured the data to remove the need for transactions. With a different data structure, we were able to update the dataset concurrently without risking data corruption.
I have a problem with transactions. The data in the transaction is always null and the update handler is called only a single time. The documentation says:
To accomplish this, you pass transaction() an update function which is used to transform the current value into a new value. If another client writes to the location before your new value is successfully written, your update function will be called again with the new current value, and the write will be retried. This will happen repeatedly until your write succeeds without conflict or you abort the transaction by not returning a value from your update function.
Now, I know that there is no other client accessing the location right now. Secondly, if I read the documentation correctly, the updateCounters function should be called multiple times if it fails to retrieve and update the data.
The other thing: if I take out the if (counters === null) condition, the execution fails because counters is null, but on a subsequent attempt the transaction finishes fine - it retrieves the data and does the update.
A simple once() read and set() on this location works just fine, but it is not safe.
What am I missing? Here is the code:
self.myRef.child('counters')
    .transaction(function updateCounters(counters) {
        if (counters === null) {
            return;
        } else {
            console.log('in transaction counters:', counters);
            counters.comments = counters.comments + 1;
            return counters;
        }
    }, function(error, committed, ss) {
        if (error) {
            console.log('transaction aborted');
            // TODO error handling
        } else if (!committed) {
            console.log('counters are null - why?');
        } else {
            console.log('counter increased', ss.val());
        }
    }, true);
Here is the data at the location:
counters:{
comments: 1,
alerts: 3,
...
}
By returning undefined in your if( ... === null ) block, you are aborting the transaction. Thus it never sends an attempt to the server, never realizes the locally cached value is not the same as the remote one, and never retries with the updated value (the actual value from the server).
This is confirmed by the fact that committed is false and the error is null in your success function, which occurs if the transaction is aborted.
Transactions work as follows:
1. Pass the locally cached value into the processing function. If you have never fetched this data from the server, the locally cached value is null (the most likely remote value for that path).
2. Get the return value from the processing function. If that value is undefined, abort the transaction; otherwise, create a hash of the current value (null) and pass it, together with the new value (returned by the processing function), to the server.
3. If the local hash matches the server's current hash, the change is applied and the server returns a success result.
4. If the server transaction is not applied, the server returns the new value, and the client calls the processing function again with the updated value from the server, until it succeeds.
5. When it is ultimately successful, when an unrecoverable error occurs, or when the transaction is aborted (by returning undefined from the processing function), the success method is called with the results.
So to make this work, obviously you can't abort the transaction on the first returned value.
One workaround to accomplish the same result (although it is coupled and not as performant or appropriate as just using transactions as designed) would be to wrap the transaction in a once('value', ...) callback, which ensures the data is cached locally before the transaction runs.
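Here is a sketch of that workaround, written against the Firebase Java client for illustration (the web API is analogous); the counters/comments layout follows the question, but treat this as an assumption-laden outline rather than a drop-in fix:

import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.MutableData;
import com.google.firebase.database.Transaction;
import com.google.firebase.database.ValueEventListener;

// Sketch: prime the local cache with a one-time read before the transaction,
// so the first doTransaction pass sees real data instead of null.
class CommentCounter {
    private final DatabaseReference countersRef =
            FirebaseDatabase.getInstance().getReference("counters");

    void incrementComments() {
        countersRef.addListenerForSingleValueEvent(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot snapshot) {
                // the cache is now warm; the transaction starts from real data
                countersRef.runTransaction(new Transaction.Handler() {
                    @Override
                    public Transaction.Result doTransaction(MutableData current) {
                        Long comments = current.child("comments").getValue(Long.class);
                        // never abort on null; write an initial value instead
                        current.child("comments").setValue(comments == null ? 1 : comments + 1);
                        return Transaction.success(current);
                    }

                    @Override
                    public void onComplete(DatabaseError error, boolean committed, DataSnapshot result) {
                        // committed is true once the server accepts the write
                    }
                });
            }

            @Override
            public void onCancelled(DatabaseError error) {
                // handle the failed priming read
            }
        });
    }
}

The once read here only warms the local cache; the transaction itself still provides the atomicity.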