Firebase Cloud function: Weird timestamp bug - firebase

So I have implemented Stories in my Flutter + Firebase app, and wrote a Cloud Function in JS to delete all stories older than 24h. I added a test Story to the database, with a field 'timestamp' set to August 25, 8:00. Now when I run the function, I print out the document id and the timestamp of each found document. However, the dates that get printed out are all in 1973!?
Here is my function:
// Start delete old Stories
// (requires assumed from context: firebase-functions, firebase-admin, and the
// es6-promise-pool package used in the Firebase function samples)
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const promisePool = require('es6-promise-pool');
admin.initializeApp();

const runtimeOpts = {
  timeoutSeconds: 300,
  memory: '512MB'
};
const MAX_CONCURRENT = 3;
const PromisePool = promisePool.PromisePool;

exports.storiesCleanup = functions.runWith(runtimeOpts).pubsub.schedule('every 1 minutes').onRun(
  async context => {
    console.log('Cleaning stories...');
    await getOldStories();
    //const promisePool = new PromisePool(() => deleteOldStories(oldStories), MAX_CONCURRENT);
    //await promisePool.start();
    console.log("finished cleaning stories");
  }
);
async function getOldStories() {
  const yesterday = Date.now() - 24*60*60*1000;
  console.log("Yesterday: " + yesterday);
  var storyPostsRef = admin.firestore().collection("StoryPosts");
  var query = storyPostsRef.where("timestamp", "<", yesterday);
  storyPostsRef.get().then(
    querySnapshot => {
      querySnapshot.forEach(
        (doc) => {
          // HERE I AM PRINTING OUT!
          console.log("Found document: " + doc.id + ", which was created on: " + doc.data().timestamp);
          //doc.ref.delete();
        }
      )
      return null;
    }
  ).catch(error => {throw error;});
}
//End delete old stories
Here is a picture of a document and its timestamp:
And this is a picture of the id and timestamp printed out for that document:
Edit: For the printing, I figured out that if I print doc.data().timestamp.seconds I get the correct number of seconds since the epoch. But I still don't understand what the number 06373393200.000000 (printed in the picture above) is. And how do I then write a query to get all stories whose timestamp is smaller than today - 24h?
var query = storyPostsRef.where("timestamp", "<", Timestamp.fromDate(yesterday)); does not work.

If you come to the conclusion that the printed timestamp is from a year other than the one shown in the console, then you are misinterpreting the output. Firestore timestamps are represented with two values: an offset in seconds since the Unix epoch, and another offset in nanoseconds from that time. This much is evident from the API documentation for Timestamp.
You're probably taking the seconds offset value and interpreting it as an offset in milliseconds, which is common in other timestamp systems. You can see how this would cause problems. If you want to take a Firestore offset in seconds and use it in a tool that interprets timestamps in milliseconds, you will first need to multiply that value by 1,000 to convert seconds to milliseconds.
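Concretely, here is a minimal sketch of both points, assuming the firebase-admin setup already used in the question (admin.firestore.Timestamp is the class the Admin SDK exposes): print the stored value via toDate()/toMillis(), and build the 24-hour cutoff as a Timestamp so the where() comparison is against the same type as the stored field.

// Sketch only: assumes `admin` is the initialized firebase-admin instance from the question.
const { Timestamp } = admin.firestore;

async function getOldStories() {
  const cutoffMillis = Date.now() - 24 * 60 * 60 * 1000;
  const cutoff = Timestamp.fromMillis(cutoffMillis); // compare against a Timestamp, not a raw number

  const querySnapshot = await admin.firestore()
    .collection("StoryPosts")
    .where("timestamp", "<", cutoff)
    .get();

  querySnapshot.forEach(doc => {
    // toDate() yields a JS Date; toMillis() yields milliseconds since the epoch
    console.log("Found document: " + doc.id +
      ", which was created on: " + doc.data().timestamp.toDate());
  });
}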

Related

Need to convert my Time string to a timestamp to update current timestamp field in firestore

I am trying to convert a time string that displays in my app as 11:00. I need to convert it to a timestamp so I can replace the current Time field in my Firestore, which is a timestamp.
I have tried using moment.js to update the field, but it changes the data type to a string.
Current value in my firestore is shown below
submitJob = () => {
  const { navigation } = this.props;
  const customer_jobnumber = navigation.getParam('JobNumber', 'No Job Number');
  const customer_starttime = navigation.getParam('datetime', 'No Job Start Time');
  const customer_end = navigation.getParam('datetimefinish', 'No Job Finish Time');
  firebase.firestore().collection("jobs").doc(customer_jobnumber).update({
    endtime: customer_end,
    starttime: customer_starttime,
    value: firebase.firestore.FieldValue.serverTimestamp()
  });
}
The desired value would be - October 4, 2020 at 11:00:00 AM UTC+1
Found the solution with this, if anyone comes across the same problem.
// Split "11:00" into hours and minutes, apply them to today's date,
// and build a Date object from the resulting millisecond value.
let tx = customer_starttime.split(":")
let dx = new Date().setHours(parseInt(tx[0]), parseInt(tx[1]), 0)
let dl = new Date(dx)
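For completeness, a minimal sketch of writing that Date back (collection, document and field names are reused from the question; everything else is an assumption). The JavaScript SDK converts a plain Date into a Firestore timestamp on write, so the field keeps its timestamp type:

// Sketch only: Firestore stores a plain JS Date as a timestamp field,
// so the Date built above can be written directly.
firebase.firestore().collection("jobs").doc(customer_jobnumber).update({
  starttime: dl
});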

Is there a better way to write this firebase cloud function than what I have right now?

I have been trying to learn Firebase Cloud Functions recently, and I have written an HTTP function that takes the itemName, sellerUid, and quantity. Then I have a background trigger (an onWrite) that finds the Item Price using the provided sellerUid and itemName, computes the total (Item Price * Quantity), and then writes it into a document in Firestore.
My question is:
with what I have right now, suppose my client purchases N items, this means that I will have:
N reads (from the N items' price searching),
2 writes (one initial write for the N items and 1 for the Total Amount after computation),
N searches from the Cloud Function??
I am not exactly sure how Cloud Functions count toward reads and writes, or how much compute time this needs (though it's all just text, so it should be negligible?)
Would love to hear your thoughts on whether what I have is already good enough, or whether there is a much more efficient way of going about this.
Thanks!
exports.itemAdded = functions.firestore.document('CurrentOrders/{documentId}').onWrite(async (change, context) => {
  const snapshot = change.after.data();
  var total = 0;
  for (const [key, value] of Object.entries(snapshot)) {
    if (value['Item Name'] != undefined) {
      await admin.firestore().collection('Items')
        .doc(key).get().then((dataValue) => {
          const itemData = dataValue.data();
          if (!dataValue.exists) {
            console.log('This is empty');
          } else {
            total += (parseFloat(value['Item Quantity']) * parseFloat(itemData[value['Item Name']]['Item Price']));
          }
        });
      console.log('This is in total: ', total);
    }
  }
  snapshot['Total'] = total;
  console.log('This is snapshot afterwards: ', snapshot);
  return change.after.ref.set(snapshot);
});
With your current approach you will be billed with:
N reads (from the N items' price searching);
1 write that triggers your onWrite function;
1 write that persists the total value;
One better approach that I can think of is to compare the lists of values in change.before.data() and change.after.data(), read the current total (0 if this is the first run), and then add only the values that were newly added in change.after.data() instead of all N values, which would potentially result in you being charged for fewer reads.
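A rough sketch of that idea (it reuses the functions/admin handles and field names from the question's code; the incremental bookkeeping is an assumption, not a tested implementation):

// Sketch only: look up prices only for keys that are new in this write,
// and add their subtotals to the previously stored Total.
exports.itemAdded = functions.firestore.document('CurrentOrders/{documentId}').onWrite(async (change, context) => {
  if (!change.after.exists) {
    return null; // document was deleted; nothing to total
  }
  const before = change.before.exists ? change.before.data() : {};
  const after = change.after.data();

  // Keys that appeared in this write and look like item entries.
  const newKeys = Object.keys(after).filter(
    key => !(key in before) && after[key] && after[key]['Item Name'] !== undefined
  );

  let total = after['Total'] || 0; // start from the previously stored total
  for (const key of newKeys) {
    const itemDoc = await admin.firestore().collection('Items').doc(key).get();
    if (itemDoc.exists) {
      const itemData = itemDoc.data();
      total += parseFloat(after[key]['Item Quantity']) *
               parseFloat(itemData[after[key]['Item Name']]['Item Price']);
    }
  }

  // Only write when the total actually changed, so the function does not re-trigger forever.
  if (total !== (after['Total'] || 0)) {
    return change.after.ref.update({ Total: total });
  }
  return null;
});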
For the actual pricing, if you check this Documentation for Cloud Functions, you will see that in your case only invocation and compute billing apply. However, there is a free tier for both, so if you are using this only to learn and the app does not get a lot of use, you should stay within the free tier with either approach.
Let me know if you need any more information.

Query firebase data by timestamp month

So I have my records in Firebase Cloud Firestore like this:
[
  {
    "category": "drinks",
    "expensetype": "card",
    "id": "5c673c4d-0a7f-9bb4-75e6-e71c47aa8d1d",
    "name": "JJ",
    "note": "YY",
    "price": 57,
    "timestamp": "2017-11-30T22:44:43+05:30"
  },
  {
    "category": "drinks",
    "expensetype": "card",
    "id": "85731878-eaed-2740-6802-e35e7758270b",
    "name": "VV",
    "note": "TTT",
    "price": 40,
    "timestamp": "2017-12-30T22:41:13+05:30"
  }
]
I want to query data with the timestamp and get data that belongs to a particular month or date.
For example, if I click a particular month (say March) in my calendar, I want only the data of that particular month in my results. How can I do that?
I have saved the date using
firebase.firestore.FieldValue.serverTimestamp()
Thanks in advance for any help.
Edit:
This is how I form my query:
var spendsRef = this.db.collection('spend', ref => ref.where("timestamp", ">", "12-12-2017"));
this.items = spendsRef.valueChanges();
But it still returns all the data in the database.
Answer :
var todayDate = new Date("Tue Jan 02 2018 00:00:00 GMT+0530 (IST)") // Any date in string
var spendsRef = this.db.collection('spend', ref => ref.where("timestamp", ">", todayDate));
this.items = spendsRef.valueChanges();
Using two relational operators, you can determine the precise range of documents returned by the query:
ref.where("timestamp", ">=", "2017-11").where("timestamp", "<", "2017-12")
This is the Android way:
Query query = mFirestore.collection("rootcollection").whereEqualTo("month", 3);
You can further sort the result by timestamp:
Query query = mFirestore.collection("rootcollection")
.orderBy("timestamp", Query.Direction.DESCENDING)
.whereEqualTo("month", 3);
I am working with node.js and storing one parameter as new Date(new Date().toUTCString()).
Now, I fetch the same data, do some processing over it and use the parameter as new Date(<param received in fetch>) in where query to get the same record amongst many records.
Thank you @Syed Sehu Mujammil A, your answer worked.
I solved a similar problem by:
// Firebase v9 modular SDK (imports assumed):
import { collection, getDocs, query, where } from 'firebase/firestore';

var now = new Date(new Date().toUTCString());
const db = collection(fireStore, 'jobs_col');
return await getDocs(query(db, where('expire_date', '>=', now)));

DocumentDB Change Feed and saving Checkpoint

After reading the documentation, I'm having a hard time conceptualizing the change feed. Let's take the code from the documentation below. The second change feed is picking up the changes from the last time it was run via the checkpoints. Let's say it is being used to create summary data and there was an issue and it needed to be re-run from a prior time. I don't understand the following:
How to specify a particular time the checkpoint should start. I understand I can save the checkpoint dictionary and use that for each run, but how do you get the changes from a given time X, to rerun some summary data, for example?
Secondly, let's say we are rerunning some summary data and we save the last checkpoint used for each summary so we know where it left off. How does one know whether a record is in or before that checkpoint?
Code that runs from collection beginning and then from last checkpoint:
Dictionary<string, string> checkpoints = await GetChanges(client, collection, new Dictionary<string, string>());
await client.CreateDocumentAsync(collection, new DeviceReading {
    DeviceId = "xsensr-201", MetricType = "Temperature", Unit = "Celsius", MetricValue = 1000
});
await client.CreateDocumentAsync(collection, new DeviceReading {
    DeviceId = "xsensr-212", MetricType = "Pressure", Unit = "psi", MetricValue = 1000
});
// Returns only the two documents created above.
checkpoints = await GetChanges(client, collection, checkpoints);
//
private async Task<Dictionary<string, string>> GetChanges(
    DocumentClient client,
    string collection,
    Dictionary<string, string> checkpoints) {
    List<PartitionKeyRange> partitionKeyRanges = new List<PartitionKeyRange>();
    FeedResponse<PartitionKeyRange> pkRangesResponse;
    do {
        pkRangesResponse = await client.ReadPartitionKeyRangeFeedAsync(collection);
        partitionKeyRanges.AddRange(pkRangesResponse);
    }
    while (pkRangesResponse.ResponseContinuation != null);
    foreach (PartitionKeyRange pkRange in partitionKeyRanges) {
        string continuation = null;
        checkpoints.TryGetValue(pkRange.Id, out continuation);
        IDocumentQuery<Document> query = client.CreateDocumentChangeFeedQuery(
            collection,
            new ChangeFeedOptions {
                PartitionKeyRangeId = pkRange.Id,
                StartFromBeginning = true,
                RequestContinuation = continuation,
                MaxItemCount = 1
            });
        while (query.HasMoreResults) {
            FeedResponse<DeviceReading> readChangesResponse = query.ExecuteNextAsync<DeviceReading>().Result;
            foreach (DeviceReading changedDocument in readChangesResponse) {
                Console.WriteLine(changedDocument.Id);
            }
            checkpoints[pkRange.Id] = readChangesResponse.ResponseContinuation;
        }
    }
    return checkpoints;
}
DocumentDB supports check-pointing only by the logical timestamp returned by the server. If you would like to retrieve all changes from X minutes ago, you would have to "remember" the logical timestamp corresponding to the clock time (ETag returned for the collection in the REST API, ResponseContinuation in the SDK), then use that to retrieve changes.
Change feed uses logical time in place of clock time because it can be different across various servers/partitions. If you would like to see change feed support based on clock time (with some caveats on skew), please propose/upvote at https://feedback.azure.com/forums/263030-documentdb/.
To save the last checkpoint per partition key/document, you can just save the corresponding version of the batch in which it was last seen (ETag returned for the collection in the REST API, ResponseContinuation in the SDK), like Fred suggested in his answer.
How to specify a particular time the checkpoint should start.
You could try to provide a logical version/ETag (such as 95488) instead of providing a null value as RequestContinuation property of ChangeFeedOptions.
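The "remember the logical timestamp corresponding to the clock time" suggestion is language-agnostic bookkeeping. A minimal sketch of just that bookkeeping in JavaScript (an in-memory list stands in for whatever store you actually use; nothing here calls the DocumentDB SDK):

// Sketch only: pair each saved checkpoint dictionary with the wall-clock time it was taken.
const savedCheckpoints = []; // [{ takenAt: Date, checkpoints: { [partitionKeyRangeId]: continuation } }]

function rememberCheckpoints(checkpoints) {
  savedCheckpoints.push({ takenAt: new Date(), checkpoints: { ...checkpoints } });
}

// To rerun summaries "from time X", pick the newest snapshot taken at or before X
// and feed its continuations back into the change feed query as RequestContinuation.
function checkpointsAsOf(clockTime) {
  const candidates = savedCheckpoints.filter(s => s.takenAt <= clockTime);
  return candidates.length ? candidates[candidates.length - 1].checkpoints : {};
}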

SQLite storage API Insert statement freezes entire firefox in bootstrapped(Restartless) AddOn

Data to be inserted has just two TEXT columns whose individual length don't even exceed 256.
I initially used executeSimpleSQL since I didn't need to get any results.
It worked smoothly for simultaneous inserts of up to 20K records, i.e. no lag or freezing observed in the background.
However, with 0.1 million I could see horrible freezing during insertion.
So, I tried these two,
Insert in chunks of 500 records - this didn't work well, since even for 20K records it showed visible freezing. I didn't even try with 0.1 million. (A chunked variant is sketched at the end of this question.)
So, I decided to go async and used executeAsync along with the binding APIs. This also shows visible freezing for just 20K records. This was the whole array being inserted, not in chunks.
var dirs = Cc["@mozilla.org/file/directory_service;1"].
           getService(Ci.nsIProperties);
var dbFile = dirs.get("ProfD", Ci.nsIFile);
var dbService = Cc["@mozilla.org/storage/service;1"].
                getService(Ci.mozIStorageService);
dbFile.append('mydatabase.sqlite');
var connectDB = dbService.openDatabase(dbFile);

let insertStatement = connectDB.createStatement(
  'INSERT INTO my_table (my_col_a, my_col_b) VALUES (:myColumnA, :myColumnB)');

var arraybind = insertStatement.newBindingParamsArray();
for (let i = 0; i < my_data_array.length; i++) {
  let params = arraybind.newBindingParams();
  // Individual elements of array have csv
  my_data_arrayTC = my_data_array[i].split(',');
  params.bindByName("myColumnA", my_data_arrayTC[0]);
  params.bindByName("myColumnB", my_data_arrayTC[1]);
  arraybind.addParams(params);
}
insertStatement.bindParameters(arraybind);
insertStatement.executeAsync({
  handleResult: function(aResult) {
    console.log('Results are out');
  },
  handleError: function(aError) {
    console.log("Error: " + aError.message);
  },
  handleCompletion: function(aReason) {
    if (aReason != Components.interfaces.mozIStorageStatementCallback.REASON_FINISHED)
      console.log("Query canceled or aborted!");
    console.log('We are done inserting');
  }
});
connectDB.asyncClose(function() {
  console.log('[INFO][Write Database] Async - plus domain data');
});
Also, I seem to get the async callbacks after a long time. Usually, executeSimpleSQL is way faster than this. If I use the SQLite Manager Tool extension to open the DB immediately, this is what I get (as expected):
SQLiteManager: Error in opening file mydatabase.sqlite - either the file is encrypted or corrupt
Exception Name: NS_ERROR_STORAGE_BUSY
Exception Message: Component returned failure code: 0x80630001 (NS_ERROR_STORAGE_BUSY) [mozIStorageService.openUnsharedDatabase]
My primary objective was to dump data as big as 0.1 million+ records and then later on perform reads when needed.
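For reference, here is the chunked variant referred to above as a minimal sketch, using only the storage calls already shown in this question (the chunk size of 500 comes from the question; this is an illustration, not a verified fix for the freezing):

// Sketch only: bind and execute the INSERT in chunks of 500 rows,
// creating a fresh statement per chunk instead of one huge binding array.
const CHUNK_SIZE = 500;
for (let start = 0; start < my_data_array.length; start += CHUNK_SIZE) {
  let chunk = my_data_array.slice(start, start + CHUNK_SIZE);
  let stmt = connectDB.createStatement(
    'INSERT INTO my_table (my_col_a, my_col_b) VALUES (:myColumnA, :myColumnB)');
  let arraybind = stmt.newBindingParamsArray();
  for (let i = 0; i < chunk.length; i++) {
    let params = arraybind.newBindingParams();
    let fields = chunk[i].split(',');
    params.bindByName("myColumnA", fields[0]);
    params.bindByName("myColumnB", fields[1]);
    arraybind.addParams(params);
  }
  stmt.bindParameters(arraybind);
  stmt.executeAsync({
    handleResult: function(aResult) {},
    handleError: function(aError) {
      console.log("Error in chunk starting at " + start + ": " + aError.message);
    },
    handleCompletion: function(aReason) {
      console.log("Chunk starting at " + start + " finished, reason: " + aReason);
    }
  });
}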
