I am trying to understand whether the delete document function in Firestore (Flutter) returns any information about the deletion, e.g. whether the item was not found or the deletion didn't go through.
There is no precondition that requires the document to exist before it can be deleted. If the document didn't exist in the first place, or was already deleted, Firestore considers the operation successful.
If you want to handle the case where the document doesn't exist differently in your application, you should use a transaction to delete the document and check for its existence inside the transaction handler.
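For example, here is a minimal sketch with the Firestore Java server SDK (the Flutter cloud_firestore API has an analogous runTransaction); the helper name and document path are illustrative:

import com.google.api.core.ApiFuture;
import com.google.cloud.firestore.DocumentReference;
import com.google.cloud.firestore.DocumentSnapshot;
import com.google.cloud.firestore.Firestore;

// Delete a document only if it exists, and report whether it was found.
public static boolean deleteIfExists(Firestore db, String path) throws Exception {
    DocumentReference ref = db.document(path);
    ApiFuture<Boolean> result = db.runTransaction(tx -> {
        DocumentSnapshot snap = tx.get(ref).get();
        if (!snap.exists()) {
            return false; // nothing to delete; surface that to the caller
        }
        tx.delete(ref);
        return true;
    });
    return result.get();
}

A call like deleteIfExists(db, "users/alice") then returns false when the document was already gone, which is exactly the signal a plain delete never gives you.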
On delete of a document, by default the workflow is left in a hanging state and the reference to the document gets removed from the workflow side (bpm_package).
I want to change this as follows: if a document has been deleted in the repository, then all the workflows associated with it should get deleted (each workflow package will always contain a single document).
I tried to implement this using a rule/action (items are deleted or leave this folder): I was able to find the workflows in JavaScript and cancel them, but it deletes neither the document nor the workflow. On inspecting the XHR request, I found that a concurrency exception occurs between the action and the onDelete policy.
How do I delete/cancel/close the workflows associated with a document?
I'm using Alfresco Community 5.2.
You need to create a Behaviour/Policy to achieve this task.
http://docs.alfresco.com/6.0/references/dev-extension-points-behaviors.html
You can use the beforeDeleteNode/onDeleteNode behaviour and write your logic there to cancel the associated workflows.
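A minimal sketch of such a behaviour in Java (Spring bean wiring omitted; the class name is illustrative):

import org.alfresco.model.ContentModel;
import org.alfresco.repo.node.NodeServicePolicies;
import org.alfresco.repo.policy.Behaviour;
import org.alfresco.repo.policy.JavaBehaviour;
import org.alfresco.repo.policy.PolicyComponent;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.workflow.WorkflowInstance;
import org.alfresco.service.cmr.workflow.WorkflowService;

public class CancelWorkflowsOnDelete implements NodeServicePolicies.BeforeDeleteNodePolicy {

    private PolicyComponent policyComponent;
    private WorkflowService workflowService;

    // Spring init-method: register the behaviour for content nodes.
    public void init() {
        policyComponent.bindClassBehaviour(
                NodeServicePolicies.BeforeDeleteNodePolicy.QNAME,
                ContentModel.TYPE_CONTENT,
                new JavaBehaviour(this, "beforeDeleteNode",
                        Behaviour.NotificationFrequency.EVERY_EVENT));
    }

    @Override
    public void beforeDeleteNode(NodeRef nodeRef) {
        // Cancel every active workflow whose package contains this document.
        for (WorkflowInstance wf : workflowService.getWorkflowsForContent(nodeRef, true)) {
            workflowService.cancelWorkflow(wf.getId());
        }
    }

    public void setPolicyComponent(PolicyComponent policyComponent) {
        this.policyComponent = policyComponent;
    }

    public void setWorkflowService(WorkflowService workflowService) {
        this.workflowService = workflowService;
    }
}

Running the cancellation in beforeDeleteNode, rather than in a folder rule, keeps it inside the same transaction as the delete, which should also avoid the concurrency exception you saw.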
I have a CosmosDB collection with a number of different partitions. I want to delete all of the data in one of the partitions, so I tried to run the command:
db.myCollection.deleteAll({PartitionKey: 'pop-9q'})
where PartitionKey is the field I partition/shard on. But when I execute this, it returns the not-very-helpful message:
ERROR: An Error has occurred
Why would I be getting this message and how can I either get more details on the cause or find a resolution?
At this time, you are unable to perform a bulk delete. Please upvote and comment on this feature request: Add the ability to delete ALL data in a partition
Additionally, which API are you consuming? For the Gremlin API you could execute something like the following: g.V().drop()
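If you only need to clear one partition rather than the whole graph, you could filter before the drop. A hedged sketch using the TinkerPop Java driver (the endpoint, credentials, and the 'PartitionKey' property name are placeholders; Cosmos also requires the GraphSON 2.0 serializer in the driver config, omitted here):

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;

Cluster cluster = Cluster.build("your-account.gremlin.cosmos.azure.com")
        .port(443)
        .enableSsl(true)
        .credentials("/dbs/yourDb/colls/yourGraph", "yourPrimaryKey")
        .create();
Client client = cluster.connect();
try {
    // Drop only the vertices in the target partition, not the entire graph.
    client.submit("g.V().has('PartitionKey', 'pop-9q').drop()").all().join();
} finally {
    client.close();
    cluster.close();
}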
The Microsoft.Azure.Cosmos SDK has added this ability. It is currently only available as a preview feature, which requires you to opt in via the portal.
See here for more details:
https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-delete-by-partition-key?tabs=dotnet-example
Sample code included there:
// Get reference to the container
var container = cosmosClient.GetContainer("DatabaseName", "ContainerName");
// Delete by logical partition key
ResponseMessage deleteResponse = await container.DeleteAllItemsByPartitionKeyStreamAsync(new PartitionKey("Contoso"));
if (deleteResponse.IsSuccessStatusCode) {
    Console.WriteLine($"Delete all documents with partition key operation has successfully started");
}
As @Mike said, a "delete all data" feature is not yet supported in the Cosmos DB SQL API or Mongo API. I notice that you have already added comments on the link above. I will just offer a workaround here: use a bulk-delete stored procedure with the Cosmos DB SQL API.
(sample code: https://gist.github.com/deepumi/2a23c5380202bddf0b85e83baf5833be)
For the Mongo API, unfortunately, even stored procedures are not supported. You could create an Azure HTTP-trigger Function and run bulk-delete code in it whenever you want, or merge that code into your own program.
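For instance, the bulk-delete code inside such a function can just be a deleteMany through the standard MongoDB Java driver, since the Cosmos DB Mongo API accepts it (the connection string, database, and collection names below are placeholders):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.result.DeleteResult;
import org.bson.Document;

try (MongoClient client = MongoClients.create("<your-cosmos-mongo-connection-string>")) {
    MongoCollection<Document> coll = client
            .getDatabase("myDatabase")
            .getCollection("myCollection");
    // Delete every document in the 'pop-9q' partition.
    DeleteResult result = coll.deleteMany(Filters.eq("PartitionKey", "pop-9q"));
    System.out.println("Deleted " + result.getDeletedCount() + " documents");
}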
Hello, I am trying to read a module with this code:
(: Entry point - must be a read-only query. :)
xdmp:invoke(
'/path/mydocument.xqy',
(xs:QName('var1'), 'test',
xs:QName('var2'), "response"))
I am new to MarkLogic. I am using Groovy and the API to connect to it, but I also saw that I can invoke the module this way, and indeed I did, but it returns
your query returned an empty sequence
I want to know if I can query xs:QName('var1'), 'test', changing test to a wildcard, or how I can get all the information from the file called /path/mydocument.xqy?
I tried to use this:
xdmp:document-get("/path/mydocument.xqy")
but it says the file is not found. However, if I use invoke I can query it; I just don't know what values I have to pass in. I was wondering if there is something like SQL's % wildcard that would give me all the data.
To answer the first question: "I am trying to read a module "
IF the module is in the database, then you must query the Modules database in which the module resides.
If the module is on the filesystem, then you cannot directly access its source as a document, but you can read it by executing xdmp:filesystem-file().
Simplification:
With the default configuration of the server and REST client, user-placed modules are in the "Modules" database and user-placed documents are in the "Documents" database. This means that if you do a GET (read a "Document") with no additional parameters, it will return documents from the "Documents" database. Assuming you are using the default configuration for client and server, this would explain the behavior you are seeing: your module code is in the Modules database, so doing a GET for it by name searches the Documents database and correctly does not find it.
You don't mention, and I don't know, the groovy library being used, but the REST API itself and all implementations of general purpose ML REST client libraries I am familiar with have options for overriding the default database with another. If the groovy library supports that, then specify the "Modules" database for your query and it should return the module document. Note: content-type will be application/text not text/xml.
You can simplify things for testing by bypassing the libraries and simply use a browser and try a URL like this http://yourserver.com:8000/v1/documents?uri=/your/module.xqy&database=Modules
Ref: https://docs.marklogic.com/REST/GET/v1/documents
Making the appropriate changes to the path and server for your use.
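If you would rather do this from code, the MarkLogic Java Client API (callable directly from Groovy) lets you name the Modules database explicitly. A minimal sketch, assuming the host, port, and credentials below are placeholders:

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.document.TextDocumentManager;
import com.marklogic.client.io.StringHandle;

// Connect to the Modules database instead of the default Documents database.
DatabaseClient client = DatabaseClientFactory.newClient(
        "yourserver.com", 8000, "Modules",
        new DatabaseClientFactory.DigestAuthContext("user", "password"));
TextDocumentManager docMgr = client.newTextDocumentManager();
// Read the module's source as text rather than executing it.
String source = docMgr.read("/path/mydocument.xqy", new StringHandle()).get();
System.out.println(source);
client.release();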
If you are still confused, then you should start with the basic MarkLogic tutorials and work through them one by one. You will most likely succeed faster by doing this than by jumping straight into coding you don't understand yet.
DETAIL:
Note: the default behaviour is to EXECUTE documents when doing a GET call, using the Modules database. Thus doing a GET of http://yourserver:8000/your/module.xqy will EXECUTE it, not return its source.
You will notice the REST API has a uri query parameter. This is EXECUTING the REST API code on /v1/documents, which in turn will read the document specified by the uri and database parameters and return it.
I guess I can use:
xdmp:invoke("/pview/get-pview-browse-profiles.xqy",
  cts:and-query((
    cts:element-value-query(
      xs:QName("letter"), "*", "wildcarded"),
    cts:element-value-query(
      xs:QName("collection"), "*", "wildcarded"))))
although it doesn't return anything
I have users uploading files, and a Cloud Function responding by adding the uploaded file to the database. I planned on using the following path:
/files/{user-id}/{filename}
The reasoning being that if a file gets deleted, I can immediately derive the database reference in the Cloud Function.
However, I am not allowed to use certain characters in database paths that are allowed in filenames (most notably, a dot). How should this be set up, so that for a removed Storage file I can immediately get the correct database path?
You could push() the path under /files/{uid} to create the entry, then use orderByValue().equalTo(x) to find the entry later for deletion. That way you won't have to worry about the contents of the file name.
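A sketch of both halves with the Admin Java SDK (the Node.js Cloud Functions API is analogous; method names and paths here are illustrative):

import com.google.firebase.database.DataSnapshot;
import com.google.firebase.database.DatabaseError;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.ValueEventListener;

// On upload: push() generates a Firebase-safe key, so the raw storage
// path (dots and all) can live in the value instead of the key.
public static void addFile(String uid, String storagePath) {
    DatabaseReference files = FirebaseDatabase.getInstance().getReference("files/" + uid);
    files.push().setValueAsync(storagePath);
}

// On storage delete: look the entry up by value and remove it.
public static void removeFile(String uid, String storagePath) {
    DatabaseReference files = FirebaseDatabase.getInstance().getReference("files/" + uid);
    files.orderByValue().equalTo(storagePath)
            .addListenerForSingleValueEvent(new ValueEventListener() {
                @Override
                public void onDataChange(DataSnapshot snapshot) {
                    for (DataSnapshot child : snapshot.getChildren()) {
                        child.getRef().removeValueAsync();
                    }
                }

                @Override
                public void onCancelled(DatabaseError error) {
                    System.err.println("Lookup cancelled: " + error.getMessage());
                }
            });
}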
I was testing some changes to my Firebase and accidentally submitted hundreds of updates to a reference. Now when I try to delete the reference, it creates a new reference with different data.
I have tried deleting everything in the Firebase, but it just keeps creating a new reference.
In this specific example I used set() to add 5 random values to a user named Michael. The 5 random values were written hundreds of times, and now when I delete the Michael user to test again, it already has a value queued up and recreates itself immediately. I looked at my upload usage and it showed a huge amount of data being uploaded at one point, which coincides with this error.
Any idea how to remove these queued-up changes?
Make sure to disconnect the client that is writing this data. I suspect somewhere you have a process running that is generating these writes.
If you can't stop the offending process for some reason, you could always modify your security rules to deny access to the client that's doing the writes (or if it's a server using a Firebase Secret to authenticate, you could revoke its secret).
I've had a similar issue; I think it has to do with your session/caching.
Try logging out of Firebase and back in. If the records are still there, make a backup of your security rules, then use:
{
  "rules": {
    ".read": false,
    ".write": false
  }
}
and delete them.