DateTime objects in MeteorJS and Redux

What is the best practice for dealing with DateTime objects in MeteorJS with Redux?
I recently implemented Redux in my React Native + Meteor app for offline functionality, following Spencer Carli's blog post. But I have some problems with the way the two systems store DateTime objects. I must admit I don't have a detailed understanding of how date objects are handled in JavaScript.
Meteor
My Meteor collection model contains a Date field (I use simpl-schema):
const Objects = new Mongo.Collection('objects');

Objects.schema = new SimpleSchema({
  startedAt: Date,
});
Date presentation
In the React Native app I have to render the date as a string:
<Text>Started at: {object.startedAt.toUTCString()}</Text>
However, in disconnected mode the date is of type string ("2017-02-11T09:00:00.000Z"), so this call fails.
Insert items with a Date object
The insert of items in React Native:
Meteor.collection('objects').insert({
  startedAt: new Date(),
});
This is accepted in disconnected mode, but when the connection to the server is restored, the insertion of the items into MongoDB is rejected.

Though not a great answer, my primary suggestion whenever working with dates in JavaScript is to use momentjs. It will save you a ton of time.
Otherwise, and I'm not sure if this is the "right" approach, I would do
<Text>Started at: {new Date(object.startedAt).toUTCString()}</Text>
That way the startedAt value will always be converted to a Date object, regardless of whether it's already one or a string.
Hope that helps!
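For completeness, here is a minimal sketch of that normalization applied at both the read and the write boundary (ensureDate is a hypothetical helper, not a Meteor API):

const ensureDate = (value) =>
  value instanceof Date ? value : new Date(value);

// Display: works whether startedAt is a Date or an ISO string.
<Text>Started at: {ensureDate(object.startedAt).toUTCString()}</Text>

// Insert: normalize anything the offline store may have rehydrated
// as a string before handing it to the collection, so the
// SimpleSchema Date check sees an actual Date.
Meteor.collection('objects').insert({
  startedAt: ensureDate(startedAt),
});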

Related

Getting data from Flutter Firebase Realtime Database when onChildAdded

I will say first that I am a firmware developer trying to build a web interface for a personal project. My question might be basic, as this is my first web interface with a database. I have searched quite a bit for how to achieve what I am trying to do, without success, so I suspect I am not approaching this the right way. But here it is:
Basically I have a device pushing data to a Firebase Realtime Database:
The push function generates a unique ID for me automatically, which is nice...
Now, in my Flutter interface, I want to display the latest entry in a widget, so I am using the onChildAdded functionality of FlutterFire:
ref.onChildAdded.listen((DatabaseEvent event) {
  print(event.snapshot.key);
  print(event.snapshot.value);
});
My first issue is that this function is triggered for all existing children first, before waiting for new ones, which is unnecessary and could become a problem once I have a lot of data. Is there a way to simply get the latest sample and still get an event when a new one is added? (I still want to keep the old data so I can build charts from it.)
My second problem is the format of the received data:
{
  "-Mx2PCeptLf2REP1YFH0": {
    "picture_url": "",
    "time_stamp": "2022-02-28 21:56:58.502005",
    "temperature": 27.71,
    "relative_humidity": 42.77,
    "eco2": 691,
    "tvoc": 198,
    "luminosity": 4193,
    "vpd": 1.71
  }
}
Before using the push method, I didn't have the automatically generated ID, so I was able to cast this as a Map<String, dynamic> and use the data like this in my widget:
Text(snapshot.data!["temperature"].toString())
But now the added nesting from the generated ID is giving me a headache, as I can't seem to figure out how to simply get at the data.
So if anyone could help me always get the single latest entry when subscribing to a path, and show how to access that data within the State of my StatefulWidget, that would be much appreciated!
My first issue is that this function is triggered for all existing children first, before waiting for new ones, which is unnecessary and could become a problem once I have a lot of data.
The Firebase Realtime Database synchronizes the state of the path/query that you listen to. So it doesn't just fire an onChildAdded event for new data, but also for any existing data that you request. There is (intentionally) no way to change that behavior.
So your options here are limited:
You can remove the data that your application has processed, so that it doesn't get passed to the clients again. That means you essentially implement a message-passing mechanism on top of Firebase's data-synchronization semantics.
You can use a query to only request a certain subset of the child nodes at a path. For example, if you remember the latest key that you've received, you can get data from there on with ref.orderByKey().startAt("-Mx2LastKeyYouAlreadySaw").
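As a concrete illustration of that second option (written in the Firebase JS SDK syntax used above; the FlutterFire query API is analogous), assuming your app persists a lastSeenKey somewhere:

let lastSeenKey = '-Mx2LastKeyYouAlreadySaw'; // hypothetical stored key

// Only children at or after lastSeenKey fire child_added now.
ref.orderByKey()
  .startAt(lastSeenKey)
  .on('child_added', (snapshot) => {
    if (snapshot.key === lastSeenKey) return; // startAt is inclusive; skip the anchor
    lastSeenKey = snapshot.key;
    console.log(snapshot.key, snapshot.val());
  });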
Since this comes up regularly, I recommend also checking:
How can I setup Firebase to observe only new data?
How to get only updated data Firebase
How to fetch only latest specific data from Firebase?
and more from these search results
My second problem is the format of the received data:
The screenshot of your database shows two keys, each of which has a single string value. The contents of that value may be JSON, but the way they are stored is just a single string.
If you want to get a single property from the JSON, you either have to:
Decode the string back into JSON with jsonDecode.
Fix the problem at the source by storing the data as proper JSON, rather than as a string.
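For illustration, in JS SDK terms (the same idea applies to whatever SDK or REST call the device firmware uses), the difference at the source looks like this:

const reading = { temperature: 27.71, relative_humidity: 42.77 };

// Wrong: pushes a single string child whose value merely looks like JSON.
ref.push(JSON.stringify(reading));

// Right: pushes a proper nested object whose fields can be read individually.
ref.push(reading);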

How do I handle reads and writes of objects containing a server timestamp in Firebase?

I'll explain.
I've been stuck with figuring out how to handle timestamps in Firebase using FieldValue.serverTimestamp().
So let's assume I have an object called question, and I want the object to contain a server-stamped timestamp. Is this how the class should look (the timestamp part is the only important part)?
class Question(
    val id: String,
    val title: String,
    val details: String,
    val author_ID: String,
    val timestamp: FieldValue,
) {
    constructor() : this(
        "",
        "",
        "",
        "",
        FieldValue.serverTimestamp()
    )
}
And then I'll set it like this?
val mQuestion = Question("id", "title", "details", "author", FieldValue.serverTimestamp())
db.collection("questions").document().set(mQuestion)
Is this the correct way to go?
If so, how do I handle the read? When the data is read back, the timestamp field now holds a Date value, and deserialization crashes because a Date can't be converted to a FieldValue.
Do I need two classes for each type of object, one for reading and one for writing? That doesn't feel right.
I was also thinking the timestamp in the class could be of type Date: I'd upload it empty, and a Cloud Function would write the date immediately. I feel like this might work, but it doesn't seem efficient either.
The automatic serialization and deserialization is mostly (from my perspective) a convenience for common read and write operations. I don't see it as a one-size-fits-all mechanism for all reads and writes that could be performed. BOTTOM LINE: If the convenience layer doesn't work for you, then don't use it.
What you're trying to do with FieldValue.serverTimestamp() seems like one of the outlying cases where convenience is not met, since that value has to be determined on the server and not on the client. As it's implemented, the client and server can't agree on a specific type that applies to both reads and writes. If the client wants the server to write a current timestamp, it has to send a token to indicate that, not an actual typed timestamp.
You could certainly implement different types for reading and writing, and that's OK. Or, you can take control of the serialization by passing and parsing Map values, which would probably be more common (and more efficient, as it doesn't involve reflection). In short, I don't think there is an easy way out with the currently strongly typed system.
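For illustration, the "sentinel on write, concrete type on read" split looks roughly like this in the Firestore JS SDK (the Android equivalent writes a HashMap; the field and collection names here are just placeholders):

// Write: send the serverTimestamp() sentinel instead of a typed value.
db.collection('questions').add({
  title: 'title',
  timestamp: firebase.firestore.FieldValue.serverTimestamp(),
});

// Read (inside an async function): the same field comes back as a
// concrete Timestamp, convertible to a Date.
const snap = await db.collection('questions')
  .orderBy('timestamp', 'desc').limit(1).get();
const date = snap.docs[0].get('timestamp').toDate();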

Firestore Timestamp gets converted to map when processed through cloud function

I am having an issue with the firebase.firestore.Timestamp class.
I am working on an Angular 6 app with firestore. I have been using Timestamps with no issues for a while, but I am in the process of moving some of my client-side services to cloud functions.
The issue is this:
When I perform a write operation directly from the client side, such as this:
const doc = { startTimestamp: firebase.firestore.Timestamp.fromDate(new Date()) };
firebase.firestore().doc('some_collection/uid').set(doc);
The document gets written correctly to firestore as a Timestamp.
However, when I send doc to a cloud function and then perform the write from the function, it gets written as a map, not a timestamp. Similarly, if I use a JS Date object instead of a firestore.Timestamp, it gets written correctly from the client side but as a string from the cloud function.
This makes sense, given that the document is just JSON in request.body; I guess I was hoping that Firestore would be smart enough to handle the conversion implicitly.
For now, I have a workaround that manually converts the objects back into firestore.Timestamps in the cloud function, but I am hoping there is a more effective solution, possibly something buried in the SDK that I have not been able to find.
Has anyone else come across and, ideally, found a solution for this?
I can provide more code samples if needed.
The behavior you're observing is expected. The Firestore client libraries have a special interpretation of Timestamp type objects, and they get converted to a Timestamp type field in the database when written. However, if you serialize a Timestamp object as JSON, you just get a plain object holding the timestamp's seconds and nanoseconds components. It's fine to send those components to Cloud Functions or some other piece of software, but that other piece of software will have to reconstitute a real Timestamp object from those parts before writing to Firestore with the Admin SDK (or whatever SDK it uses).
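A minimal sketch of that reconstitution step inside an HTTP function (the property names on the serialized timestamp, _seconds/_nanoseconds vs. seconds/nanoseconds, vary by client SDK version, so inspect request.body before relying on this):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.writeDoc = functions.https.onRequest(async (req, res) => {
  const raw = req.body.startTimestamp; // e.g. { _seconds: 1528902600, _nanoseconds: 0 }
  // Rebuild a real Timestamp from its serialized components.
  const startTimestamp = new admin.firestore.Timestamp(
    raw._seconds ?? raw.seconds,
    raw._nanoseconds ?? raw.nanoseconds
  );
  await admin.firestore().doc('some_collection/uid').set({ startTimestamp });
  res.sendStatus(200);
});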
In your class model, annotate the field with @ServerTimestamp:
@ServerTimestamp var timestamp: Date? = null
or
@ServerTimestamp Date timestamp = null
Leave timestamp uninitialized in your code (don't set it to, say, new Date()); when the annotated field is null on write, Firestore fills in the server's timestamp.
Example:
@IgnoreExtraProperties
data class ProductItem(
    var userId: String? = "",
    var avgRating: Double = 0.toDouble(),
    @ServerTimestamp var timestamp: Date? = null
)
or
public class ProductItem {
    String userId;
    Double avgRating;
    @ServerTimestamp Date timestamp;
}

Proper way to handle Query Cursor in Google Firestore? (Vue/Nuxt.js)

I run a chat application with Firebase Firestore and it all works super well; however, I'm running into the following issue:
For paginating my conversations & chats I use query cursors for my listeners. I save these query cursors in my state (Vuex) so I can access them later when needed. That works, and I can paginate my chat messages and conversations.
I create query cursors like so:
const query = await ref.get()
const cursor = query.docs[query.docs.length - 1]
commit('SET_CONVERSATIONS_QUERY_CURSOR', cursor)
And later use them in the next query like so:
.startAfter(state.conversations.queryCursor)
Everything actually works perfectly.
My issue now is that the cursor saved in my state seems to be updated regularly (...why, Firebase?), and mutated directly at that. This gives me the following error message when using Vuex strict mode (which disallows mutating state outside of mutation handlers):
app.js:506 Error: [vuex] Do not mutate vuex store state outside mutation handlers.
Now, I of course want to use strict mode to avoid mutating state directly, but I can't because of the query cursors in my state.
I tried to clone the cursor before saving it to the store, but shallow clones did no good, and deep clones failed with "Converting circular structure to JSON".
So...
Is there a recommended way on how to store query cursors for later use?
Are there options to just store the id of a document and later "recreate" a query cursor?
Thanks!
You can prevent a JavaScript object from being modified by using Object.freeze(obj).
So in your case it should be: const cursor = Object.freeze(query.docs[query.docs.length - 1])
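A minimal sketch of how that fits the flow from the question (the orderBy field and page size are assumptions; the rest mirrors the question's code):

// Freeze the snapshot so Vuex strict mode never observes an in-place mutation.
const query = await ref.get();
const cursor = Object.freeze(query.docs[query.docs.length - 1]);
commit('SET_CONVERSATIONS_QUERY_CURSOR', cursor);

// Later: the frozen snapshot still works as a query cursor.
ref.orderBy('createdAt')
  .startAfter(state.conversations.queryCursor)
  .limit(25)
  .get();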

how to update/insert/delete item in akavache list object?

Should I use Akavache as the primary local database in my Xamarin.Forms application, or as a cache on top of another SQLite database? I can't find any examples of how to update, insert, or delete data in an Akavache object. For example:
CacheItems = await BlobCache.LocalMachine.GetOrFetchObject<List<Item>>("CacheItems",
    async () => await getdatafromAzure(recursive));
I am getting items from Azure and storing them on the local machine, and these items are editable/deletable, or the user can add a new item. How do I do that?
Anything saved to LocalMachine gets persisted physically to the device, so on app or device restart it will still be there (unless the user has removed the app or cleared its data).
As far as how to access/save data, there are lots of good samples here:
https://github.com/akavache/Akavache
InsertObject and GetObject are your basic access methods, and then there are lots of extension methods like GetOrFetchObject and GetAndFetchLatest, which are very useful.
Here's a quick sample, which I haven't thoroughly tested, to show one way to access things. It'd probably be better to use some of the extension methods, but I figure an example like this is conceptually useful.
BlobCache.LocalMachine.GetObject<TObject>(someKey)
    .Catch((KeyNotFoundException ke) => Observable.Return<TObject>(null))
    .SelectMany(result =>
    {
        // Object doesn't exist in the cache, so create a new one.
        if (result == null)
            result = new TObject();

        // Apply whatever updates you want to make:
        // result.SomeField = "bob";

        // This will replace or insert the data.
        return BlobCache.LocalMachine.InsertObject(someKey, result);
    })
    .Subscribe();
It's really all pretty boring stuff :-p Just get an object and store an object. Under the hood, Akavache does a lot of really cool optimizations and synchronization around that boring stuff, which is what lets it stay boring for the rest of us.
In most of my cases, when I start up a view model I retrieve the object from the cache and then just store it as a property on the view model or inside some service class. Then, whenever changes are made to the object, I insert it back into the cache:
BlobCache.LocalMachine.InsertObject(someKey, result).Subscribe();
That way I know that if the app closes down, I'll have the latest version of that object ready when the user starts the app up again.
The examples I gave are more the full Rx way of accessing things... What you have in your original question works fine:
await BlobCache.LocalMachine.GetOrFetchObject<List<object>>("CacheItems",
    async () => await getdatafromAzure(recursive));
That will basically check the cache and, if the item doesn't exist there, go to Azure...
LocalMachine stores to the physical device, InMemory just stores to an internal dictionary that goes away once the app is unloaded from memory, and UserAccount works with NT roaming accounts.
