Firestore Timestamp gets converted to map when processed through cloud function - firebase

I am having an issue with the firebase.firestore.Timestamp class.
I am working on an Angular 6 app with firestore. I have been using Timestamps with no issues for a while, but I am in the process of moving some of my client-side services to cloud functions.
The issue is this:
When I perform a write operation directly from the client side, such as this:
const doc = { startTimestamp: firebase.firestore.Timestamp.fromDate(new Date()) };
firebase.firestore().doc('some_collection/uid').set(doc);
The document gets written correctly to firestore as a Timestamp.
However, when I send doc to a cloud function, then perform the write from the function, it gets written as a map, not a timestamp. Similarly, if I use a JS Date() object instead of a firestore.Timestamp, it gets written correctly from the client side but written as a string from the cloud function.
This makes sense given that the document is just JSON in the request.body; I guess I was just hoping that firestore would be smart enough to handle the conversion implicitly.
For now, I have a workaround that just manually converts the objects into firestore.Timestamps again in the cloud function, but I am hoping that there is a more effective solution, possibly something buried in the SDK that I have not been able to find.
Has anyone else come across this and, ideally, found a solution?
I can provide more code samples if needed.

The behavior you're observing is expected. The Firestore client libraries have a special interpretation of Timestamp type objects, and they get converted to a Timestamp type field in the database when written. However, if you serialize a Timestamp object as JSON, you will just get a plain object with the timestamp's seconds and nanoseconds components. If you want to send these timestamp components to Cloud Functions or some other piece of software, that's fine, but that other piece of software is going to have to reconstitute a real Timestamp object from those parts before writing to Firestore with the Admin SDK or whatever SDK you're using to deal with Firestore.
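For example, a minimal sketch of that reconstitution step in an HTTP Cloud Function (the document path is illustrative, and the exact serialized property names, seconds/nanoseconds vs. _seconds/_nanoseconds, depend on the SDK version, so inspect req.body first):
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';
admin.initializeApp();

export const writeDoc = functions.https.onRequest(async (req, res) => {
  // After JSON serialization the Timestamp arrives as a plain object,
  // e.g. { seconds: 1545635349, nanoseconds: 0 } (assumed shape).
  const { seconds, nanoseconds } = req.body.startTimestamp;
  // Reconstitute a real Timestamp before writing with the Admin SDK.
  const startTimestamp = new admin.firestore.Timestamp(seconds, nanoseconds);
  await admin.firestore().doc('some_collection/some_uid').set({ startTimestamp });
  res.sendStatus(200);
});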

In your class model, use
@ServerTimestamp var timestamp: Date? = null
or
@ServerTimestamp Date timestamp = null;
You can leave the timestamp uninitialized in your code rather than assigning it a value such as new Date().
Example:
@IgnoreExtraProperties
data class ProductItem(
    var userId: String? = "",
    var avgRating: Double = 0.toDouble(),
    @ServerTimestamp var timestamp: Date? = null
)
or
public class ProductItem {
    String userId;
    Double avgRating;
    @ServerTimestamp Date timestamp;
}

Related

How to limit size of an array in Firestore on a write?

Does anyone know how to limit an array so new items get pushed in and old ones are discarded in the same write?
I'm guessing this isn't possible but it sure would be handy.
// * Store notification
// Users collection
const usersCollection = db.collection('users').doc(uid).collection('notifications').doc();
// Write this notification to the database as well
await usersCollection.update({
  count: admin.firestore.FieldValue.increment(1),
  notifications: admin.firestore.FieldValue.arrayUnion({
    'symbol': symbol,
    'companyname': companyname,
    'change': priceDifference,
    'changeDirection': directionOperatorHandler,
    'updatedPrice': symbolLatestPrice,
    'timestamp': currentTimestamp,
  })
});
Written in TypeScript
Alternatively, I was thinking of running a scheduled cloud function every week to go through and trim down the arrays based on the timestamp.
The reason I'm using an array to store my notifications is because I'm expecting a lot of writes.
There is no simple configuration for this. Your code should implement your requirements with the following steps (a sketch follows the list):
Reading the document
Modifying the array in memory
Checking that the size is within limits
Writing the document back
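A minimal sketch of that read-modify-write loop in a Cloud Function, assuming a cap of maxNotifications items and wrapping the steps in a transaction so concurrent writes don't race (the document path and field names are illustrative):
import * as admin from 'firebase-admin';
admin.initializeApp();
const db = admin.firestore();

async function addNotification(uid: string, notification: object, maxNotifications = 50) {
  // Illustrative: one fixed document per user holding the notifications array.
  const docRef = db.collection('users').doc(uid).collection('meta').doc('notifications');
  await db.runTransaction(async (tx) => {
    const snap = await tx.get(docRef);                      // 1. read the document
    const notifications: object[] = snap.data()?.notifications ?? [];
    notifications.push(notification);                       // 2. modify the array in memory
    while (notifications.length > maxNotifications) {       // 3. check the size is within limits
      notifications.shift();                                //    drop the oldest entries
    }
    tx.set(docRef, { notifications, count: notifications.length }, { merge: true }); // 4. write back
  });
}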

How do I handle reads and writes of objects containing a server timestamp in Firebase?

I'll explain.
I've been stuck with figuring out how to handle timestamps in Firebase using FieldValue.serverTimestamp().
So let's assume I have an object called question, and I want the object to contain a server-stamped timestamp. Is this how the class should look (the timestamp part is the only important part)?
class Question(
val id: String,
val title: String,
val details: String,
val author_ID: String,
val timestamp: FieldValue,
) {
constructor() : this(
"",
"",
"",
"",
FieldValue.serverTimestamp()
)
}
And then I'll set it like this?
val mQuestion = Question("id", "title", "details", "author", FieldValue.serverTimestamp())
db.collection("questions").document().set(mQuestion)
Is this the correct way to go?
If so, how do I handle the read? Because when the data is being read, the timestamp field would now correspond to a Date type and would cause a crash because Date can't be converted to FieldValue.
Do I need to have two classes for each type of object? One used for reading and one for writing? It doesn't feel right.
I was also thinking of making the timestamp in the class be of type Date, uploading it empty, and having a cloud function write the date immediately. I feel like this might work, but it also doesn't feel efficient.
The automatic serialization and deserialization is mostly (from my perspective) a convenience for common read and write operations. I don't see it as a one-size-fits-all mechanism for all reads and writes that could be performed. BOTTOM LINE: If the convenience layer doesn't work for you, then don't use it.
What you're trying to do with FieldValue.serverTimestamp() seems like one of the outlying cases where convenience is not met, since that value has to be determined on the server and not on the client. As it's implemented, the client and server can't agree on a specific type that applies to both reads and writes. If the client wants the server to write a current timestamp, it has to send a token to indicate that, not an actual typed timestamp.
You could certainly implement different types for reading and writing, and that's OK. Or, you can take control of the serialization by passing and parsing Map values, which would probably be more common (and more efficient, as it doesn't involve reflection). In short, I don't think there is an easy way out with the currently strongly typed system.
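The question is Kotlin/Android, but the same "one type for writes, another for reads" idea can be sketched in TypeScript with the web SDK, which has the same asymmetry: a FieldValue sentinel goes in, a concrete Timestamp comes out (all names here are illustrative):
import firebase from 'firebase/app';
import 'firebase/firestore';

// Write-side shape: the timestamp field is a server-side sentinel.
interface QuestionWrite {
  title: string;
  timestamp: firebase.firestore.FieldValue;
}

// Read-side shape: the same field comes back as a concrete Timestamp.
interface QuestionRead {
  title: string;
  timestamp: firebase.firestore.Timestamp;
}

const db = firebase.firestore();

async function createQuestion(title: string): Promise<void> {
  const doc: QuestionWrite = {
    title,
    timestamp: firebase.firestore.FieldValue.serverTimestamp(),
  };
  await db.collection('questions').doc().set(doc);
}

async function readQuestion(id: string): Promise<QuestionRead | undefined> {
  const snap = await db.collection('questions').doc(id).get();
  return snap.data() as QuestionRead | undefined;
}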

Querying Firestore for items with timestamp before today

In the new Cloud Firestore you can set a server timestamp on a message object using the following line when creating the object:
showtime: firebase.firestore.FieldValue.serverTimestamp()
This is useful for messages from any user as it does not rely on each user's local clock.
The challenge I am having is in querying and setting the showtime field afterwards. What I want to do is look up any message that has a showtime earlier than 'now'. In my application many messages are pushed into the future. I only want the ones with a showtime before today to be returned and displayed.
Here is my query:
var query = firebase.firestore()
.collection('messages')
.where('owner','==',userID)
.where('showtime','<',timenow)
.orderBy('showtime', 'desc')
.limit(25);
The challenge is that I do not know how to get the current time (on the server) to use in the query. There is a now() call on the type Timestamp in Firebase, but I am unsure how to call it AND I am not sure based on some other questions here whether the Firebase timestamp matches the Cloud Firestore timestamp!
So the question is: How do I set a variable called timenow to be the current time on the server and use it in a query to pull 25 messages before now? (in Javascript on the client and also extra credit for in a function on the server)
A quick follow on question is then how I update the showtime timestamp to be one week later than now?
Hope we have some Firebase / Cloud Firestore mavens on Stackoverflow!
** Choosing an answer below with the caveat of a feature request: A call in Firebase to the server to request a current timestamp so that the client code can work off one standard clock. **
Expanding on the excellent answer by Alex Dunlop, I was in a similar situation and it made sense once I realized that Firestore operates according to two design principles (or at least, that is my analysis)
The emphasis is on simple queries that stream live updates to you. To enable this, queries are by design limited in complexity. This takes load off the Firestore database server and allows it to stream (effectively re-run) queries fast.
Because of the limited query language and few 'server side' operators (FieldValue is one of the few), Google can optimize the queries mercilessly.
There are many more complex operations that developers require. Instead of implementing them in Firestore, developers are asked to spin up a cloud function which makes them responsible (in code and cost) for that additional complexity.
If you come from a conventional DB design like Postgres, the lack of query expressiveness and server-side operations is stunning. If you think of the Firestore use case and the principles, it all makes sense.
Try Firestore security rules with data validation:
match /messages/{msgId} {
// optionally: request.time + someBuffer
allow read: if request.time > resource.data.showtime;
}
I don't think you want to trust the client (since you mentioned you want to block client access to future showtimes). Besides changing their clock, they could just edit the javascript.
A buffer might be necessary to avoid invalidating the query if there is some discrepancy between the client-provided Date.now() and the Firestore rules' request.time; specifically, if request.time happens to be earlier than the client date, the query would include documents falling outside the valid range and fail.
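A minimal client-side sketch of that buffer (bufferMs is an illustrative value, not something the rules require):
// Subtract a small buffer from the client clock so the whole query range
// stays inside what the request.time-based rule allows.
const bufferMs = 30 * 1000;
const timenow = new Date(Date.now() - bufferMs);
const query = firebase.firestore()
  .collection('messages')
  .where('owner', '==', userID)
  .where('showtime', '<', timenow)
  .orderBy('showtime', 'desc')
  .limit(25);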
Checking the read timestamp on an empty snapshot seems to work:
const serverTime = (await db.collection('nonexistent').get()).readTime;
Agreed with one of the other comments that this extra call could be expensive, but this should work in cases where an accurate "not before" server time is important for correctness.
You are right, serverTimestamp() is exactly for getting a timestamp on the server and not relying on the user's local clock. One thing to note is that generally sending a message and taking the timestamp from a user's local clock is going to be okay, as a message timestamp is not extremely time sensitive. Sure, you would like to know when the message was sent, but if it is within 1-2 seconds it's not a problem in most cases.
If you are doing a query on the client side, your query should not be based on the server time; it should be based on the client time, as it is a client query, not a server query.
Here is a client query that you are after.
const currentTime = new Date();
const query = firebase.firestore()
.collection('messages')
.where('owner','==',userID)
.where('showtime','<',currentTime)
.orderBy('showtime', 'desc')
.limit(25);
This query will get 25 messages with a 'showtime' before the current time on the client.
Now if the messages need to be extremely time sensitive and you absolutely need them to be based on the server timestamp, I recommend that instead of doing a query on the client like above you set up a cloud function API.
Have a look at the firebase docs for calling cloud functions directly if you haven't before.
Here is what you would want your cloud function to look like:
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';
admin.initializeApp();
export const getUserMessages = functions.https.onCall((data, context) => {
  // context.auth is undefined for unauthenticated callers.
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'Sign in first.');
  }
  const uid = context.auth.uid;
  const firestore = admin.firestore();
  // Return the promise so the result is sent back to the caller, and map to
  // plain document data because a QuerySnapshot itself is not serializable.
  return firestore.collection('messages')
    .where('owner', '==', uid)
    .where('showtime', '<', new Date())
    .orderBy('showtime', 'desc')
    .limit(25)
    .get()
    .then(snapshot => snapshot.docs.map(doc => doc.data()));
});
This will get the messages based on the server timestamp. One thing to note: UNLESS you need this to be extremely time sensitive, this is not a good idea, because it adds an unnecessary extra hop every time, first a call to the cloud function and then the function's query to the firestore database.
I would recommend that instead of basing it on the server time you base it on the client timestamp. 99 times out of 100 the time difference between client and server is not worth the extra double calls, especially when you think about scaling everything up as you get more users.
Hope that answered your question :)

DateTime objects in MeteorJS and Redux

What is best practice to deal with DateTime objects in MeteorJS with Redux?
I recently implemented Redux in my React Native + Meteor app for offline functionality, following the blog post by Spencer Carli. But I have some problems with the way the two systems store DateTime objects. I must admit that I have no detailed understanding of how date objects are handled in JavaScript.
Meteor
My Meteor collection schema contains a Date-type field (I use simpl-schema):
const Objects= new Mongo.Collection('objects');
Objects.schema = new SimpleSchema({
startedAt: Date,
});
Date presentation
In the React Native app I render the date as a string:
<Text>Started at: {object.startedAt.toUTCString()}</Text>
However, in disconnected mode the date is of type string ("2017-02-11T09:00:00.000Z"), so this call fails.
Insert items with a Date object
Inserting items in React Native:
Meteor.collection('objects').insert({
  startedAt: new Date(),
});
This is accepted in disconnected mode, but when the connection with the server is restored, insertion of the items into MongoDB is rejected.
Though not a great answer, my primary suggestion whenever using dates in JavaScript is to use momentjs. It will save you a ton of time.
Otherwise, and I'm not sure if this is the "right" approach, I would do
<Text>Started at: {new Date(object.startedAt).toUTCString()}</Text>
that way the startedAt value will always be converted to a Date object, regardless of whether it's already one or a string.
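If you go the momentjs route, a minimal sketch might look like this (assuming moment is installed; the format string is illustrative):
import moment from 'moment';
// moment() accepts both a Date object and an ISO 8601 string,
// so the same call works whether Meteor is connected or offline.
const startedAtLabel = moment(object.startedAt).utc().format('ddd, DD MMM YYYY HH:mm:ss');
// <Text>Started at: {startedAtLabel}</Text>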
Hope that helps!

Does AngularFire2's database.list hold a reference or actually grab data?

I'm following along the with the basic AngularFire2 docs, and the general format seems to be:
const items = af.database.list('/items');
// to get a key, check the Example app below
items.update('key-of-some-data', { size: newSize });
My confusion is that in the source code, it seems as though calling database.list() grabs all the data at the listed url (line 114 here)
Can anyone help clarify how that works? If it does indeed grab all the data, is there a better way of getting a reference without doing that? Or should I just reference each particular URL individually?
Thanks!
When you create an AngularFire2 list, it holds an internal Firebase ref - accessible via the list's public $ref property.
The list is an Observable - which serves as the interface for reading from the database - and includes some additional methods for writing to the database: push, update and remove.
In the code in your question, you are only calling the update method and are not subscribing to the observable, so no data is loaded from the database into memory:
const items = af.database.list('/items');
// to get a key, check the Example app below
items.update('key-of-some-data', { size: newSize });
It's only when a subscription to the observable is made that listeners for value and the child_... events are added to the ref and the list builds and maintains an internal array that's emitted via the observable. So if you are only calling the methods that write to the database, it won't be loading any data.
The AngularFire2 object is implemented in a similar manner.
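A minimal sketch of the difference, using the AngularFire2 API from the question (nothing is read until subscribe is called):
const items = af.database.list('/items');

// Write-only usage: no listeners are attached, so no data is downloaded.
items.update('key-of-some-data', { size: newSize });

// Reading: subscribing attaches the value/child_... listeners and starts
// streaming the list into memory.
const subscription = items.subscribe(list => {
  console.log('items currently in the list:', list.length);
});

// The underlying Firebase ref is still available if needed.
const ref = items.$ref;

// Unsubscribe when done to detach the listeners.
subscription.unsubscribe();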
