I'm trying to understand when I should call Query.close(Object) or Query.closeAll();
From the docs I see that it will "close a query result" and "release all resources associated with it," but what does "close" mean? Does it mean I can't use objects that I get out of a Query after I've called close on them?
I ask because I'm trying to compartmentalize my code with functions like getUser(id), which will create a Query, fetch the User, and destroy the query. If I have to keep the Query around just to access the User, then I can't really do that compartmentalization.
What happens if I don't call close on an object? Will it get collected as garbage? Will I be able to access that object later?
Please let me know if I can make my question more specific.
You can't use the query results after you've closed them (i.e. the List object you got back from query.execute()). You can still access the objects from the results if you copied them into your own List, or hold references to them in your code. Failure to call close can leak memory.
When your query method returns a single object it is easy to simply close the query before returning the single object.
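For example, a minimal sketch of the getUser(id) pattern from the question, assuming a JDO PersistenceManager named pm and a persistent User class with an id field (the names are illustrative):

import javax.jdo.PersistenceManager;
import javax.jdo.Query;

// Sketch only: fetches a single User and closes the query before returning.
public User getUser(long id) {
    Query q = pm.newQuery(User.class, "id == :id");
    try {
        q.setUnique(true);            // we expect at most one result
        return (User) q.execute(id);  // the User object itself outlives the query
    } finally {
        q.closeAll();                 // release the query's resources before returning
    }
}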
On the other hand, when your query method returns a collection, the query method itself cannot close the query before returning the result, because the query needs to stay open while the caller is iterating through the results.
This puts the responsibility for closing a query that returns a collection on the caller, and can introduce leaks if the caller neglects to do so. I thought there must be a safer way, and there is!
Guido, a long-time DataNucleus user, created an 'auto closing' collection facade that wraps the collection returned by JDO's Query.execute method. Usage is extremely simple: wrap the query result inside an instance of the auto closing collection object:
Instead of returning the Query result set like this:
return q.execute();
simply return an 'auto closing' wrapped version of it:
return new JDOQueryResultCollection(q, q.execute());
To the caller it appears like any other Collection but the wrapper keeps a reference to the query that created the collection result and auto closes it when the wrapper is disposed of by the GC.
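Conceptually, the wrapper is just a delegating collection that closes its query when it is finalized. A minimal sketch of the idea (this is not Guido's actual code, just an illustration of the mechanism):

import java.util.AbstractCollection;
import java.util.Collection;
import java.util.Iterator;
import javax.jdo.Query;

// Illustration of the auto-closing mechanism, not the real exPOJO class:
// delegate Collection calls to the query result and close the query when
// the GC disposes of the wrapper.
public class AutoClosingCollection<E> extends AbstractCollection<E> {
    private final Query query;
    private final Collection<E> results;

    public AutoClosingCollection(Query query, Collection<E> results) {
        this.query = query;
        this.results = results;
    }

    public Iterator<E> iterator() { return results.iterator(); }
    public int size() { return results.size(); }

    @Override
    protected void finalize() throws Throwable {
        try {
            query.closeAll(); // auto-close when the wrapper is collected
        } finally {
            super.finalize();
        }
    }
}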
Guido kindly gave us permission to include his clever auto closing code in our open source exPOJO library. The auto closing classes are completely independent of exPOJO and can be used in isolation. The classes of interest are in the expojo_jdo*.jar file that can be downloaded from:
http://www.expojo.com/
JDOQueryResultCollection is the only class used directly but it does have a few supporting classes.
Simply add the jar to your project and import com.sas.framework.expojo.jdo.JDOQueryResultCollection into any Java file that includes query methods that return results as a collection.
Alternatively, you can extract the source files from the jar and include them in your project. They are:
JDOQueryResultCollection.java
Disposable.java
AutoCloseQueryIterator.java
ReadonlyIterator.java
In the samples for the Cosmos DB SQL API, there are a couple of uses of Database.ReadAsync() which don't seem to be doing anything useful. The remarks section in the method's documentation doesn't really indicate what it might be used for either.
What is the reason for using it in these cases? When would you typically use it?
ChangeFeed/Program.cs#L475 shows getting a database then calling ReadAsync to get another reference to the database
database = await client.GetDatabase(databaseId).ReadAsync();
await database.DeleteAsync();
which seems to be functionally the same as
database = client.GetDatabase(databaseId);
await database.DeleteAsync();
throwing the same exception if the database is not found.
and DatabaseManagement/Program.cs#L80-L83
DatabaseResponse readResponse = await database.ReadAsync();
Console.WriteLine($"\n3. Read a database: {readResponse.Resource.Id}");
await readResponse.Database.CreateContainerAsync("testContainer", "/pk");
which seems to be equivalent to:
Console.WriteLine($"\n3. Read a database: {database.Id}");
await database.CreateContainerAsync("testContainer", "/pk");
producing the same output and creating the container as before.
You are correct that those samples might need polishing. The main difference is:
GetDatabase just gets a proxy object; it does not mean the database actually exists. If you attempt an operation on a database that does not exist, for example CreateContainer, it can fail with a 404.
ReadAsync will read the DatabaseProperties, allowing you to obtain any information from there, and it will only succeed if the database actually exists. Does that guarantee that if I call CreateContainer right away it will succeed? No, because the database could have been deleted right in the middle.
So in summary, ReadAsync is good if you want to get any of the DatabaseProperties or if you want to for some reason verify the database exists.
Most common scenarios would just use GetDatabase because you are probably attempting operations down the chain (like creating a container or executing item level operations in some container in that database).
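A small sketch of the contrast, assuming an initialized CosmosClient named client and a databaseId (the helper method is illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Illustrative helper; client and databaseId come from the surrounding code.
async Task ShowDifferenceAsync(CosmosClient client, string databaseId)
{
    Database database = client.GetDatabase(databaseId); // proxy only, no network call yet

    // ReadAsync goes to the service: it throws a CosmosException (404)
    // if the database does not exist, otherwise it returns the properties.
    DatabaseResponse response = await database.ReadAsync();
    Console.WriteLine($"Read database: {response.Resource.Id}");

    // The typical path skips ReadAsync and operates on the proxy directly.
    await database.CreateContainerIfNotExistsAsync("testContainer", "/pk");
}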
Short Answer
Database.ReadAsync(...) is useful for reading database properties.
The Database object is useful for performing operations on the database, such as creating a container via Database.CreateContainerIfNotExistsAsync(...).
A bit more detail
The Microsoft Docs page for Database.ReadAsync is kind of confusing and not well written in my opinion:
The definition says:
Reads a DatabaseProperties from the Azure Cosmos service as an asynchronous operation.
However, the example shows ReadAsync returning a DatabaseResponse object, not a DatabaseProperties object:
// Reads a Database resource where database_id is the ID property of the Database resource you wish to read.
Database database = this.cosmosClient.GetDatabase(database_id);
DatabaseResponse response = await database.ReadAsync();
It's only after a bit more digging that things become clearer. When you look at the documentation page for the DatabaseResponse Class it says the inheritance chain for DatabaseResponse is:
Inheritance: Object -> Response<DatabaseProperties> -> DatabaseResponse
If you then have a look at the Docs page for the Response<T> Class you'll see there is an implicit operator that converts Response<T> to T:
public static implicit operator T (Microsoft.Azure.Cosmos.Response<T> response);
So that means that even though the ReadAsync method returns a DatabaseResponse object, that is implicitly converted to a DatabaseProperties object (since DatabaseResponse inherits Response<DatabaseProperties>).
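In practice that means you can assign the awaited result straight to a DatabaseProperties variable, reusing the names from the example above:

Database database = this.cosmosClient.GetDatabase(database_id);

// The implicit operator collapses the DatabaseResponse into the
// DatabaseProperties it wraps.
DatabaseProperties properties = await database.ReadAsync();
Console.WriteLine($"Id: {properties.Id}, ETag: {properties.ETag}");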
So Database.ReadAsync is useful for reading database properties.
The Docs page for Database.ReadAsync could have been clearer about the implicit link between the DatabaseResponse object returned by the method and the DatabaseProperties object that it wraps.
I noticed that (insert/delete) XQueries executed with the BaseX client always return an empty string. I find this very confusing and unintuitive.
Is there a way to find out if the query was "successful" without querying the database again (and using potentially buggy "transitive" logic like "if I deleted a node, there must be 'oldNodeCount-1' nodes in the XML")?
XQuery Update statements do not return anything -- that's how they are defined. But you're not the only one who does not like those restrictions, and BaseX added two ways around this limitation:
Returning Results
By default, it is not possible to mix different types of expressions in a query result. The outermost expression of a query must either be a collection of updating or non-updating expressions. But there are two ways out:
The BaseX-specific update:output() function bridges this gap: it caches the results of its arguments at runtime and returns them after all updates have been processed. The following example performs an update and returns a success message:
update:output("Update successful."), insert node <c/> into doc('factbook')/mondial
With the MIXUPDATES option, all updating constraints will be turned off. Returned nodes will be copied before they are modified by updating expressions. An error is raised if items are returned within a transform expression.
If you want to modify nodes in main memory, you can use the transform expression.
The transform expression will not help you, as you seem to modify the data on disk. Enabling MIXUPDATES allows you to both update the document and return something at the same time, for example running something like
let $node := <c/>
return ($node, insert node $node into doc('factbook')/mondial)
MIXUPDATES allows you to return something which can be further processed. Results are copied before being returned; if you run multiple update operations and do not get the expected results, make sure you have understood the concept of the pending update list.
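If you only need this behavior for a single query, the option can also be declared in the query prolog instead of being set globally; a sketch (the db:mixupdates option declaration is BaseX-specific, so verify it against your version's documentation):

declare option db:mixupdates 'true';
(: perform the update and return the inserted node in one query :)
let $node := <c/>
return ($node, insert node $node into doc('factbook')/mondial)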
The db:output() function intentionally breaks its interface contract: it is defined to be an updating function (not having any output), but at the same time it prints some information to the query info. You cannot further process these results, but the output can help you debug some issues.
Pending Update List
Either way, you will not get an immediate result from the update; you have to add something on your own -- and be aware that updates are not visible until the pending update list is applied, i.e. after the query has finished.
Compatibility
Obviously, these options are BaseX-specific. If you strictly require standards-compliant XQuery, you cannot use these expressions.
In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. That's fine using the this.added function in the publish.
My problem is that I want to treat the bogus doc as if it were a real document. Specifically, this gets troublesome when I want to update it: for the real docs I run RealDocs.update, but doing that on the bogus doc fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement and I'm not too fond of modifying the core code.
Right now I'm stuck at creating a BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult, since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server side collection so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the Evented Mind site) it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as you usually would, but on MyCollection._collection instead of just on MyCollection. MyCollection.update() would thus become MyCollection._collection.update(). So, using a simple wrapper, one can pass in the usual arguments to an update call to update the collection as usual (which will try to call the server, which in turn will trigger your allow/deny rules), or add 'local' as the last argument to only perform the update in the client collection. Something like this should do it:
DocsUpdateWrapper = function () {
  var lastIndex = arguments.length - 1;
  if (arguments[lastIndex] === 'local') {
    // Strip the trailing 'local' flag; `arguments` is not a real array,
    // so borrow Array.prototype.slice to copy it.
    var args = Array.prototype.slice.call(arguments, 0, lastIndex);
    Docs._collection.update.apply(Docs._collection, args);
  } else {
    // Forward to the normal collection, which syncs with the server
    // and runs your allow/deny rules.
    Docs.update.apply(Docs, arguments);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too.) (I didn't try this function yet, but it should serve well as an example.)
The biggest benefit of this, in my opinion, is that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are local or also live on the server. By adding a simple boolean to the doc we can keep track of which documents are only local and which are not (an improved DocsWrapper could check for that boolean, so we could even omit passing the 'local' argument), so we know how to update them.
There are some people working on local storage in the browser
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call .update on that object rather than a general one.
You can put the code from this.added into the transform process.
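A hedged sketch of that idea (the isBogus flag and the Docs collection name are assumptions for illustration, not part of the original code):

Docs = new Meteor.Collection('docs', {
  transform: function (doc) {
    // Attach an update method that knows whether this doc is local-only.
    doc.update = function (modifier) {
      if (doc.isBogus) {
        Docs._collection.update(doc._id, modifier); // client-only write
      } else {
        Docs.update(doc._id, modifier);             // normal server-synced write
      }
    };
    return doc;
  }
});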
You can also set up a local minimongo collection and insert into it from a method callback:

FoundAgents = new Meteor.Collection(null, Agent.transformData)
FoundAgents.remove({})

Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify(err, null, 2)
  else
    _.each data, (item) ->
      FoundAgents.insert(item)
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at meteorpad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().
I want to tell whether an XML document has been constructed (e.g. using xdmp:unquote) or has been retrieved from a database. One method I have tried is to check the document-uri property:
declare variable $doc as document-node() external;
if (fn:exists(fn:document-uri($doc))) then
'on database'
else
'in memory'
This seems to work well enough but I can't see anything in the MarkLogic documentation that guarantees this. Is this method reliable? Is there some other technique I should be using?
I think that behavior has been stable for a while. You could always check for the URI too, as long as you expect it to be from the current database:
xdmp:exists(fn:doc(fn:document-uri($doc)))
Or if you are in an update context and need ACID guarantees, use fn:exists.
The real test would be to try to call xdmp:node-replace or similar, and catch the expected error. Those node-level update functions do not work on constructed nodes. But that requires an update context, and might be tricky to implement in a robust way.
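A hedged sketch of that test (it needs an update context, and the generic catch is illustrative rather than a stable error contract):

declare variable $doc as document-node() external;

try {
  (: replacing the root element with a copy of itself is effectively a no-op update :)
  xdmp:node-replace($doc/element(), $doc/element()),
  'on database'
} catch ($e) {
  'in memory'
}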
If your XML document is in-memory, you can use the in-mem-update API:
import module namespace mem = "http://xqdev.com/in-mem-update" at "/MarkLogic/appservices/utils/in-mem-update.xqy";
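For example, a sketch using the library's mem:node-replace function (the function name is from the xqdev in-mem-update module; verify it against your installed version):

import module namespace mem = "http://xqdev.com/in-mem-update" at "/MarkLogic/appservices/utils/in-mem-update.xqy";

(: build an in-memory document and get back a modified copy :)
let $doc := xdmp:unquote('<root><old/></root>')
return mem:node-replace($doc/root/old, <new/>)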
If your XML document exists in your database, you can use fn:exists() or fn:doc-available().
The real test of in-memory vs. in-database is xdmp:node-replace.
If you are able to replace, update, or delete a node, then it is in the database; if it throws an exception, then it is not in the database.
Now there are two situations:
1. Your document is not created at all: you can use fn:empty() to check whether it exists.
2. Your document is created and it's in memory: if fn:empty() returns false and xdmp:node-replace throws an exception, then it's in-memory.
My problem, simplified:
I have a dataGrid with a dataProvider "documents"
A column of the dataGrid has a labelFunction that gets the project_id field of the document and returns the project name from a bindable variable "projects"
Now, I dispatch the events to download the documents and the projects from the server, but if the documents get downloaded before the projects, the label function throws an error (no "projects" variable)
Therefore, I must serialize the commands being executed: the getDocuments command must execute only after the getProjects command.
In the real world, though, I have dozens of resources being downloaded, and those commands are not always grouped together (so I can't, for example, execute the second command from the onSuccess() method of the first, because they don't always have to be executed together).
I need a simple solution... I need an idea...
If I understand you correctly, you need to serialize the replies from the server. I have done that by using AsyncToken.
The approach: Before you call the remote function, add a "token" to it. For instance, an id. The reply from the server for that particular call will then include that token. That way you can keep several calls separate and create chains of remote calls.
It's quite cool actually:
var service:RemoteObject;
// ...
var call:AsyncToken = service.theMethod.send();
call.myToken = "serialization id";

// Result handler registered on the service:
private function onResult(event:ResultEvent):void
{
    // Fetch the serialization id and do something with it.
    var serId:String = event.token.myToken;
}
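Applied to the question, a hedged sketch of chaining getProjects before getDocuments (the method names come from the question; the token field and handler wiring are illustrative):

private function loadAll():void
{
    var call:AsyncToken = service.getProjects();
    call.next = "getDocuments"; // AsyncToken is dynamic, so ad-hoc fields work
}

private function onResult(event:ResultEvent):void
{
    if (event.token.next == "getDocuments")
    {
        // The projects have arrived, so it is now safe to fetch the documents.
        service.getDocuments();
    }
}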