I am trying to deploy Firestore indexes and I am getting the error "__name__ only indexes are not supported".
I checked the newly added DESCENDING indexes and they seem to be fine. There are a bunch of composite indexes, and I am not sure which one is causing the error.
What are the potential causes for this error?
To find the specific index that is causing the issue, try bisecting:
1. Delete half of your indexes and run the deploy.
2. If you still get the error, repeat the split on the remaining half.
3. If you don't get the error, deploy the half you deleted and repeat on that half.
Eventually you will narrow it down to the specific index that is causing the error. A sketch of automating the split is shown below.
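A minimal sketch of that split, assuming your indexes live in the standard firestore.indexes.json file in the current directory (the helper name and file paths are just placeholders; deploy afterwards with `firebase deploy --only firestore:indexes`):

```python
import json
import shutil

# Hypothetical helper for bisecting firestore.indexes.json: keep the first or
# second half of the composite indexes, deploy, and see whether the error persists.
def write_half(keep_first_half: bool) -> None:
    # Keep a backup of the full index definition before overwriting it.
    shutil.copyfile("firestore.indexes.json", "firestore.indexes.backup.json")

    with open("firestore.indexes.backup.json") as f:
        config = json.load(f)

    indexes = config.get("indexes", [])
    mid = len(indexes) // 2
    config["indexes"] = indexes[:mid] if keep_first_half else indexes[mid:]

    with open("firestore.indexes.json", "w") as f:
        json.dump(config, f, indent=2)

# Write one half, deploy, then repeat on whichever half still reproduces the error.
write_half(keep_first_half=True)
```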
I'm using xtdb in a testing environment with a RocksDB backend. All was well until yesterday, when the system stopped ingesting new data. It tells me that this is because of "missing docs", and gives me the id of the allegedly missing doc, but since it is missing, that doesn't tell me much. I have a specific format for my xt/ids (basically type+guid) and this id doesn't match that format, so I don't think it is one of mine. Calling history on the entity id just gives me an empty vector. I understand the block on updates for consistency reasons, but how do I diagnose and recover from this situation (short of trashing the database and starting again)? This would obviously be a massive worry were it to happen in production.
In the general case this "missing docs" error indicates a corrupted document store and the only proper resolution is to manually restore/recover based on a backup of the document store. This almost certainly implies some level of data loss.
However, there was a known bug in the transaction function logic prior to 1.22.0 which could intermittently produce this error (without any genuine data loss); see https://github.com/xtdb/xtdb/commit/1c30550fb14bd6d09027ff902cb00021bd6e57c4
If you weren't using transaction functions, then there may be another, as yet unknown, explanation.
I am in a situation where I have to create a document in a collection if it doesn't exist, or delete it if it does.
To handle this, I have thought of:
1. Read doc
2. If !doc.exists -> create it
3. Else -> delete it
But maybe it would be cheaper to do:
1. Try to create a doc.
2. If it fails because the doc already exists, delete it.
I have been looking at the documentation https://firebase.google.com/docs/firestore/pricing but I can't find anything related to unsuccessful operations. Will I be charged for a document creation if it fails?
The create, even though it failed, still counts as a write operation.
Unfortunately I am unable to provide steps to observe this behavior. The response is based on my own use of Firestore, where I have seen write counts increase with failed writes. It is not clear whether all failure types increase the write count, or what the necessary conditions are; this information is not available in the GCP documentation.
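If you want to avoid paying for a failed create in the first place, here is a minimal sketch of the read-then-act approach (option 1) using the Python server client; the collection and document names are just placeholders:

```python
from google.cloud import firestore

# Minimal sketch of option 1: read the doc, then create or delete it.
# "users" and "some-doc-id" are placeholder names.
client = firestore.Client()
doc_ref = client.collection("users").document("some-doc-id")

snapshot = doc_ref.get()  # 1 read
if snapshot.exists:
    doc_ref.delete()      # 1 delete
else:
    doc_ref.create({"created": firestore.SERVER_TIMESTAMP})  # 1 write
```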
We keep getting this exception in our app, which has a scheduled job that reads from a global secondary index. It looks like the GSI keeps backfilling periodically even though there were no changes to the table. The volumes on our table are quite low, so I am a bit surprised to see this a few times a day.
This is not a new index, so I am wondering whether it should only backfill on insert/update of records.
Anyone seen this before?
It might still be creating that GSI. Wait for some time, depending on the amount of data in your DB, and this issue will go away.
I just waited 30 seconds and the error went away on its own. I had edited my DynamoDB table directly from the AWS console, and I think this temporary error originated from that.
This error occurs when you newly create a GSI on a DynamoDB table. Wait a while; once the index has been created you will no longer see the error.
In the console the index status shows as Creating... while it is being built. Once it switches to Active, hit your function again and you will not get the error.
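If you want to check from code rather than the console, here is a minimal sketch (assuming boto3; the table and index names are placeholders) that waits for the GSI to become ACTIVE before querying it:

```python
import time
import boto3

# Minimal sketch: poll DescribeTable until the GSI is ACTIVE before querying it.
# "my-table" and "my-gsi" are placeholder names.
dynamodb = boto3.client("dynamodb")

def wait_for_gsi_active(table_name: str, index_name: str) -> None:
    while True:
        table = dynamodb.describe_table(TableName=table_name)["Table"]
        statuses = {
            gsi["IndexName"]: gsi["IndexStatus"]
            for gsi in table.get("GlobalSecondaryIndexes", [])
        }
        if statuses.get(index_name) == "ACTIVE":
            return
        time.sleep(5)  # still CREATING / UPDATING (backfilling)

wait_for_gsi_active("my-table", "my-gsi")
```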
Try Detecting and Correcting Index Key Violations; I guess it is due to an index key violation.
I am using riak version 1.4.10 and it is in a ring with two hosts. I am unable to get rid of keys left over from previous operations using simple delete operations on keys. When I list the keys for a bucket, it shows me the old keys, however if I try to retrieve the data associated with a key, no data is found. When I try to delete the key, it still persists. What could be the cause of this? Is there a way to wipe the keys in the bucket so it starts from a clean slate? I don't care about any of the data in riak, but I would rather not have to reinstall everything again.
You are probably seeing the tombstones of the old data. Since Riak is an eventually consistent data store, it needs to keep track of deletes as if they were ordinary writes, at least for a little while.
If data is present on one node but not another, how do you tell whether it is a PUT that hasn't propagated yet, or a DELETE?
Riak solves this by using a tombstone. Whenever you delete something, instead of just wiping the data immediately, Riak replaces the existing value with a special value that it knows means deleted. This special value contains a vclock that is descended from the previous value, and metadata indicating deleted. So when it comes time to decide the above question, Riak simply compares the vclock of the value with that of the tombstone. Whichever descends from the other must be the correct one.
To solve the problem of an ever growing data size that contains mostly tombstones, tombstones are reaped after a time. The time is set using the delete_mode setting. After the DELETE is processed, and the tombstone has been written to the primary vnodes, the delete process issues a GET request for the key. Whenever the GET process encounters a tombstone, and all of the primary vnodes responded with the same tombstone, it schedules the tombstone to be reaped according to the delete_mode setting.
So if you want to actually get rid of the tombstones, check your delete_mode setting to make sure it is not set to 'keep', and issue a GET for each key to make sure it is really gone.
Or if you are just wiping the data store to restart your tests, stop Riak, delete all the files under the data_root for the backend you are using, and restart.
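A minimal sketch of the "issue a GET for each key" step over the HTTP API (assuming a node on the default port localhost:8098 and a placeholder bucket name; key listing is expensive, so only do this on a test cluster):

```python
import requests

# Minimal sketch: list the keys in a bucket and GET each one so Riak can
# schedule any remaining tombstones for reaping.
BASE = "http://localhost:8098"
BUCKET = "my-bucket"  # placeholder

keys = requests.get(f"{BASE}/buckets/{BUCKET}/keys?keys=true").json()["keys"]
for key in keys:
    resp = requests.get(f"{BASE}/buckets/{BUCKET}/keys/{key}")
    # A 404 here means only a tombstone (or nothing) remains for this key.
    print(key, resp.status_code)
```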
we're having some weird things happening with a cleanup cronjob and riak:
the objects we store (postboxes) have a 2i for modification date (which is a unix timestamp).
there's a cronjob running frequently that deletes all postboxes that have not been modified within 180 days. however, we've found evidence that some (very few) postboxes that were modified in the last three days were deleted by this cronjob.
After reviewing and debugging every line of code several times, I am confident that this is not a problem with the cronjob.
I also traced back all delete calls to that bucket - and no one else is deleting objects there.
Of course I also checked with Riak to read the postboxes with r=ALL: they're definitely gone. (and they are stored with w=QUORUM)
I also checked the logs: updating the post boxes did succeed (there were no errors reported back from the write operations)
This leaves me with two possible causes:
1. riak loses data (which I am not willing to believe that easily)
2. the secondary indexes are corrupt and queries to them return wrong keys
So my questions are:
1. Can 2is actually break?
2. Is it possible to verify that?
3. Am I missing something completely different?
Cheers,
Matthias
Secondary index queries in Riak are coverage queries, which means that they will only use one of the stored replicas, and not perform a quorum read.
As you are writing with w=QUORUM, it is possible that one (or more) of the replicas does not get updated if you have n_val set to 3 or higher, while the operation is still deemed successful. If that stale replica is the one selected for the coverage query, you could end up deleting based on the old value. In order to avoid this, you will need to perform updates with w=ALL.
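A minimal sketch of such an update over the HTTP API (assuming the default port; the bucket, key, and index name are placeholders for however your postboxes and 2i are actually named); w=all makes the write wait for all n_val replicas before returning success:

```python
import requests

# Minimal sketch: update a postbox with w=all so every replica is written
# before the operation is reported successful (placeholder names/port).
# A real read-modify-write should also pass back the X-Riak-Vclock header
# obtained from a prior GET to avoid creating siblings.
BASE = "http://localhost:8098"

resp = requests.put(
    f"{BASE}/buckets/postboxes/keys/some-postbox-id?w=all",
    data=b'{"modified": 1400000000}',
    headers={
        "Content-Type": "application/json",
        # 2i entry for the modification timestamp, as described in the question
        "x-riak-index-modified_int": "1400000000",
    },
)
print(resp.status_code)
```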