I'm currently trying my hand at Jetpack Compose and Firebase with the MVVM architecture. In the process, I came across a question that I can't answer with Google.
My project is structured as follows:
TaskModel -> data class
TaskListModel -> data class (contains a collection of tasks)
(TaskListModel -> TaskModel = 1:n)
TaskViewModel -> uses TaskRepository (insert, update, get, ...)
TaskListViewModel -> uses TaskListRepository (insert, update, get, ...)
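Roughly, the data classes look like this (simplified; the field names are just placeholders):

data class TaskModel(
    val uuid: String = "",
    val title: String = "",
    val taskListUuid: String = ""   // placeholder back-reference for the 1:n relation
)

data class TaskListModel(
    val uuid: String = "",
    val name: String = "",
    val tasks: List<TaskModel> = emptyList()   // the collection of tasks
)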
I have learned that a ViewModel is passed to the respective screen by the navigation component, so that the screen gets access to its data.
My question:
How do I access the TaskModels?
Option 1: Should I do it via the TaskListViewModel, since a TaskList contains a collection of TaskModels?
Option 2: Should I do it via the TaskViewModel by passing the uuid of the TaskList and getting the data based on that?
No matter which option makes more sense, something ends up superfluous: with Option 1 the TaskViewModel becomes redundant, and with Option 2 the collection inside the TaskListModel does.
In the samples for the Cosmos DB SQL API, there are a couple of uses of Database.ReadAsync() which don't seem to be doing anything useful. The remarks section in the method's documentation doesn't really indicate what it might be used for either.
What is the reason for using it in these cases? When would you typically use it?
ChangeFeed/Program.cs#L475 shows getting a database then calling ReadAsync to get another reference to the database
database = await client.GetDatabase(databaseId).ReadAsync();
await database.DeleteAsync();
which seems to be functionally the same as
database = client.GetDatabase(databaseId);
await database.DeleteAsync();
throwing the same exception if the database is not found.
and DatabaseManagement/Program.cs#L80-L83
DatabaseResponse readResponse = await database.ReadAsync();
Console.WriteLine($"\n3. Read a database: {readResponse.Resource.Id}");
await readResponse.Database.CreateContainerAsync("testContainer", "/pk");
which seems to be equivalent to:
Console.WriteLine($"\n3. Read a database: {database.Id}");
await database.CreateContainerAsync("testContainer", "/pk");
producing the same output and creating the container as before.
You are correct that those samples might need polishing. The main difference is:

GetDatabase just gets a proxy object; it does not mean the database actually exists. If you attempt an operation (for example, CreateContainer) on a database that does not exist, it can fail with a 404.

ReadAsync will read the DatabaseProperties, letting you obtain any information from them, and it will only succeed if the database actually exists. Does that guarantee that if I call CreateContainer right away it will succeed? No, because the database could have been deleted right in the middle.

So in summary, ReadAsync is good if you want to get any of the DatabaseProperties or if you want, for some reason, to verify that the database exists.

Most common scenarios would just use GetDatabase, because you are probably attempting operations down the chain (like creating a container or executing item-level operations in some container in that database).
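A minimal sketch of the contrast (databaseId is a placeholder and error handling is omitted); only the ReadAsync and CreateContainerIfNotExistsAsync lines actually hit the service:

Database database = client.GetDatabase(databaseId);        // just a local proxy, no network call
DatabaseResponse response = await database.ReadAsync();     // network call; throws with status 404 if the database does not exist
Console.WriteLine($"Id: {response.Resource.Id}, charge: {response.RequestCharge} RU");
await database.CreateContainerIfNotExistsAsync("myContainer", "/pk");   // operations like this work directly off the proxy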
Short Answer
Database.ReadAsync(...) is useful for reading database properties.
The Database object is useful for performing operations on the database, such as creating a container via Database.CreateContainerIfNotExistsAsync(...).
A bit more detail
The Microsoft Docs page for Database.ReadAsync is kind of confusing and not well written in my opinion:
The definition says:
Reads a DatabaseProperties from the Azure Cosmos service as an asynchronous operation.
However, the example shows ReadAsync returning a DatabaseResponse object, not a DatabaseProperties object:
// Reads a Database resource where database_id is the ID property of the Database resource you wish to read.
Database database = this.cosmosClient.GetDatabase(database_id);
DatabaseResponse response = await database.ReadAsync();
It's only after a bit more digging that things become clearer. When you look at the documentation page for the DatabaseResponse Class it says the inheritance chain for DatabaseResponse is:
Inheritance: Object -> Response<DatabaseProperties> -> DatabaseResponse
If you then have a look at the Docs page for the Response<T> Class you'll see there is an implicit operator that converts Response<T> to T:
public static implicit operator T (Microsoft.Azure.Cosmos.Response<T> response);
So even though the ReadAsync method returns a DatabaseResponse object, that object can be implicitly converted to a DatabaseProperties object (since DatabaseResponse inherits from Response<DatabaseProperties>).
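In practice, the implicit operator means you can assign the awaited call straight to a DatabaseProperties variable and skip the DatabaseResponse entirely:

Database database = this.cosmosClient.GetDatabase(database_id);
DatabaseProperties properties = await database.ReadAsync();   // DatabaseResponse implicitly converted to DatabaseProperties
Console.WriteLine(properties.Id);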
So Database.ReadAsync is useful for reading database properties.
The Docs page for Database.ReadAsync could have been clearer about the implicit link between the DatabaseResponse object returned by the method and the DatabaseProperties object that it wraps.
I was taking a look at how hierarchical data works in Cloud Firestore, and was wondering how that would best translate into a Go struct.
In the linked example, there is a collection of chat rooms, and each chat room document has two fields: a name and a collection of messages.
Would the following be a good way of representing a chat room using a Go struct, given that there will be fairly frequent writes to and reads from the messages collection? I would also want to access the messages in the collection in the Go code.
type ChatRoom struct {
    Name     string
    Messages *firestore.CollectionRef
}
This definition seems to compile and works fine, but I was wondering if there are better or more idiomatic ways of going about this.
The documentation doesn't mention a CollectionRef being supported as a field type in a document, so I am not sure that will work. You can see what I am referring to here.
On the other hand, I don't think you really gain anything from it, since you can access the collection by doing the following.
client.Collection("chatroom/" + <NAME> + "/messages")
Also, I don't think it is good practice to couple a high-level type like ChatRoom to a Firestore implementation. So I would remove that field and create an interface that hides the details of how ChatRooms and Messages are stored. You can do something like this.
type Repo interface {
    GetChatRoom(name string) (ChatRoom, error)
    GetMessages(name string) ([]Message, error)
}
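A minimal sketch of a Firestore-backed implementation of that interface might look like the following (I've kept the "chatroom" path from above; the Message type, the field names, and the use of context.Background() to keep the signatures unchanged are all assumptions):

import (
    "context"

    "cloud.google.com/go/firestore"
    "google.golang.org/api/iterator"
)

type ChatRoom struct {
    Name string `firestore:"name"`
}

type Message struct {
    Text string `firestore:"text"`
}

// firestoreRepo hides the Firestore details behind the Repo interface.
type firestoreRepo struct {
    client *firestore.Client
}

func (r *firestoreRepo) GetChatRoom(name string) (ChatRoom, error) {
    snap, err := r.client.Collection("chatroom").Doc(name).Get(context.Background())
    if err != nil {
        return ChatRoom{}, err
    }
    var room ChatRoom
    err = snap.DataTo(&room)
    return room, err
}

func (r *firestoreRepo) GetMessages(name string) ([]Message, error) {
    iter := r.client.Collection("chatroom/" + name + "/messages").Documents(context.Background())
    var msgs []Message
    for {
        snap, err := iter.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            return nil, err
        }
        var m Message
        if err := snap.DataTo(&m); err != nil {
            return nil, err
        }
        msgs = append(msgs, m)
    }
    return msgs, nil
}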
I'm following along with the basic AngularFire2 docs, and the general format seems to be:
const items = af.database.list('/items');
// to get a key, check the Example app below
items.update('key-of-some-data', { size: newSize });
My confusion is that in the source code, it seems as though calling database.list() grabs all the data at the listed URL (line 114 here).
Can anyone help clarify how that works? If it does indeed grab all the data, is there a better way of getting a reference without doing that? Or should I just reference each particular URL individually?
Thanks!
When you create an AngularFire2 list, it holds an internal Firebase ref - accessible via the list's public $ref property.
The list is an Observable - which serves as the interface for reading from the database - and includes some additional methods for writing to the database: push, update and remove.
In the code in your question, you are only calling the update method and are not subscribing to the observable, so no data is loaded from the database into memory:
const items = af.database.list('/items');
// to get a key, check the Example app below
items.update('key-of-some-data', { size: newSize });
It's only when a subscription to the observable is made that listeners for value and the child_... events are added to the ref and the list builds and maintains an internal array that's emitted via the observable. So if you are only calling the methods that write to the database, it won't be loading any data.
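A rough sketch of the difference, reusing the snippet from your question (newSize comes from there):

const items = af.database.list('/items');

// Write-only usage: no listeners are attached, so nothing is downloaded from '/items'.
items.update('key-of-some-data', { size: newSize });

// Reading: subscribing attaches the value/child_... listeners and loads the list into memory.
const subscription = items.subscribe(list => console.log(list.length));

// The underlying Firebase ref is still available if you need it directly.
console.log(items.$ref.toString());

// Unsubscribing detaches the listeners again.
subscription.unsubscribe();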
The AngularFire2 object is implemented in a similar manner.
I want to take a list of item names from a collection as a simple array to use for things like autocompleting user input and checking for duplicates. I would like this list to be reactive so that changes in the data will be reflected in the array. I have tried the following based on the Meteor documentation:
setReactiveArray = (objName, Collection, field) ->
  update = ->
    context = new Meteor.deps.Context()
    context.on_invalidate update
    context.run ->
      list = Collection.find({}, {field: 1}).fetch()
      app[objName] = _(list).pluck field
  update()

Meteor.startup ->
  if not app.items?
    setReactiveArray('items', Items, 'name')

# set autocomplete using the array
Template.myForm.set_typeahead = ->
  Meteor.defer ->
    $('[name="item"]').typeahead {source: app.items}
This code seems to work, but it kills my app's load time (takes 5-10 seconds to load on dev/localhost vs. ~1 second without this code). Am I doing something wrong? Is there a better way to accomplish this?
You should be able to use Items.find({}, {fields: {name: 1}}).fetch(), which will return an array of items and is reactive, so it will re-run its enclosing function whenever the query results change, as long as it's called in a reactive context.
For the Template.myForm.set_typeahead helper, you might want to call that query inside the helper itself, store the result in a local variable, and then call Meteor.defer with a function that references that variable. Otherwise I'm not sure that the query will be inside a reactive context when it's called.
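Something along these lines (an untested sketch, reusing the selector and typeahead call from the question):

Template.myForm.set_typeahead = ->
  # Running the query inside the helper keeps it in a reactive context.
  names = _(Items.find({}, {fields: {name: 1}}).fetch()).pluck 'name'
  Meteor.defer ->
    $('[name="item"]').typeahead {source: names}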
Edit: I have updated the code below both because it was fragile, and to put it in context so it's easier to test. I have also added a caution - in most cases, you will want to use @zorlak's or @englandpost's methods (see below).
First of all, kudos to @zorlak for digging up my old question that nobody answered. I have since solved this with a couple of insights gleaned from @David Wihl and will post my own solution. I will hold off on selecting the correct answer until others have a chance to weigh in.
@zorlak's answer solves the autocomplete issue for a single field, but as stated in the question, I was looking for an array that would update reactively, and the autocomplete was just one example of what it would be used for. The advantage of having this array is that it can be used anywhere (not just in template helpers) and that it can be used multiple times in the code without having to re-execute the query (and the _.pluck() that reduces the query to an array). In my case, this array ends up in multiple autocomplete fields as well as validation and other places. It's possible that the advantages I'm putting forth are not significant in most Meteor apps (please leave comments), but this seems like an advantage to me.
To make the array reactive, simply build it inside a Meteor.autorun() callback - it will re-execute any time the target collection changes (but only then, avoiding repetitive queries). This was the key insight I was looking for. Also, using the Template.rendered() callback is cleaner and less of a hack than the set_typeahead template helper I used in the question. The code below uses underscore.js's _.pluck() to extract the array from the collection and uses Twitter bootstrap's $.typeahead() to create the autocomplete.
Updated code: I have edited the code so you can try this in a stock meteor create test environment. Your HTML will need a line <input id="typeahead" /> in the 'hello' template. @Items has the @ sign to make Items available as a global on the console (Meteor 0.6.0 added file-level variable scoping). That way you can enter new items in the console, such as Items.insert({name: "joe"}), but the @ is not necessary for the code to work. The other necessary change for standalone use is that the typeahead function now sets the source to a function (->) so that it will query items when activated instead of being set at rendering, which allows it to pick up changes to items.
@Items = new Meteor.Collection("items")
items = {}

if Meteor.isClient
  Meteor.startup ->
    Meteor.autorun ->
      items = _(Items.find().fetch()).pluck "name"
      console.log items  # first result will be empty - see caution below

  Template.hello.rendered = ->
    $('#typeahead').typeahead {source: -> _(Items.find().fetch()).pluck "name"}
Caution! The array we created is not itself a reactive data source. The reason that the typeahead source: needed to be set to a function -> that returned items is that when Meteor first starts, the code runs before Minimongo has gotten its data from the server, and items is set to an empty array. Minimongo then receives its data and items is updated. You can see this process if you run the above code with the console open: console.log items will log twice if you have any data stored.
Template.x.rendered() calls don't set up a reactive context and so won't retrigger due to changes in reactive elements (to check this, pause your code in the debugger and examine Deps.currentComputation -- if it's null, you are not in a reactive context and changes to reactive elements will be ignored). But you might be surprised to learn that your templates and helpers will also not react to items changing -- a template using #each to iterate over items will render empty and never rerender. You could make the array act as a reactive source (the simplest way being to store the result with Session.set(), or you can do it yourself), but unless you are doing a very expensive calculation that should run as seldom as possible, you are better off using @zorlak's or @englandpost's methods. It may seem expensive to have your app querying the database repeatedly, but Minimongo caches the data locally and avoids the network, so it will be quite fast. Thus in most situations, it's better just to use
Template.hello.rendered = ->
  $('#typeahead').typeahead {source: -> _(Items.find().fetch()).pluck "name"}
unless you find that your app is really bogging down.
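For reference, if you do want the array itself to behave as a reactive source, a minimal sketch of the Session.set() approach mentioned above could look like this (the "itemNames" key is just an example):

if Meteor.isClient
  Meteor.startup ->
    Meteor.autorun ->
      # Re-runs whenever Items changes; Session makes the result reactive for consumers.
      Session.set "itemNames", _(Items.find().fetch()).pluck "name"

  # Any helper that calls Session.get will now rerender when the array changes.
  Template.hello.items = ->
    Session.get "itemNames"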
Here is my quick solution for Bootstrap typeahead.
On client side:
Template.items.rendered = ->
  $("input#item").typeahead
    source: (query, process) ->
      subscription = Meteor.subscribe "autocompleteItems", query, ->
        process _(Items.find().fetch()).pluck("name")
        subscription.stop()  # here may be a bit different logic,
                             # such as keeping all opened subscriptions until autocomplete
                             # will be successfully completed and so on
    items: 5
On server side:
Meteor.publish "autocompleteItems", (query) ->
  Items.find {name: new RegExp(query, "i")},
    fields: {name: 1}
    limit: 5
I actually ended up approaching the autocompletion problem completely differently, using client-side code instead of querying servers. I think this is superior because Meteor's data model allows for fast multi-rule searching with custom rendered lists.
https://github.com/mizzao/meteor-autocomplete
Autocompleting users with @, where online users are shown in green:
In the same line, autocompleting something else with metadata and bootstrap icons:
Please fork, pull, and improve!
When designing a program in a functional style, I think about designing a base layer of functions that operate on a single object. Then, if I need to operate on a collection of those objects I start building on top of that base layer using traditional functional glue like mapping, filtering, reducing, etc.
For example, let's say I have a DB-backed application that has Users and Tasks, where Users are assigned Tasks.
I may have a function defined like
def doesUserPerformTask?(taskId, userId)
  // Go to the DB to see if this userId performs this taskId
  // return userId on success, or else nil
end
Later down the road, I am given a list of user ids and want to know which of them perform task X. Perfect: I already have the function doesUserPerformTask?, and it has been battle-tested all over the code, so I can just map over the user id list, call that function for each of them, and then filter the results.
While this is a great benefit of functional design, I have an efficiency problem: each element (i.e. user id) passed to map requires a DB hit. So I now need to create an entirely new function that operates on a list of userIds.
I keep running into this problem when designing DB-backed programs in a functional style: I keep having to write new functions that don't build on the base layer, and I end up with lots of functions written specifically for single items and others for collections of items.
Is there a better way to organize DB backed programs written in a functional style?
Why not pass the actual object into the function?
def doesUserPerformTask?(task, user)
  // Check the user object directly
  // return true or (false|nil)
end
Then write a wrapper that will fetch the user and task from the DB
def doesUserPerformTaskFromDB?(taskId, userId)
  // DB calls here to fetch the task and user objects
  doesUserPerformTask?(task, user) ? user.id : nil
end
Then write a wrapper for collections
def whichUsersPerformTask?(task)
  // fetch users from the DB
  // map the non-DB function over the collection
end
Then again, unless you're going to use that user collection for something else, wouldn't it be better to rely on the DB query to get the users you need (via whichever query language)? It seems like there are a few options that make this both efficient and DRY.
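For the original "which of these user ids perform task X" case, that direct-query approach might look roughly like this (the function, table, and column names are made up):

def whichOfTheseUsersPerformTask?(taskId, userIds)
  // One round trip: let the database do the filtering, e.g. (hypothetical SQL)
  //   SELECT user_id FROM assignments WHERE task_id = ? AND user_id IN (?)
  // map the rows back to user ids and return them
end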