Firebase Timestamp (serverValue) in seconds? - firebase

1. The problem
I'm trying to write some values keyed by the timestamp (in seconds, not milliseconds) at which they arrive at the database, in order to trigger a server function every time data arrives.
I want the keys at one-second granularity because millisecond keys would trigger far more often than I need.
E.g.
|- timestampSecond1: true
|- timestampSecond2: true
|- ...
|- timestampSecondX: true
I'm using Firebase Realtime Database
2. What I've tried
2.1. ServerValue Timestamp:
This approach writes the timestamp in milliseconds. Since ServerValue.TIMESTAMP is a placeholder that the server replaces the moment the write arrives, I cannot convert it to seconds before it is replaced, nor when I send the write request to the server.
https://firebase.google.com/docs/reference/android/com/google/firebase/database/ServerValue
2.2. Cloud Functions: a second-pass server function that reads the millisecond timestamp written by ServerValue, truncates it to seconds, and writes that value to another branch.
2.3. Cloud Functions: a function triggered by the arrival of the data that internally converts the timestamp to seconds and writes the data under the seconds value directly.
I'm aware 2.2. and 2.3. will work for what I'm trying to do (a sketch of 2.3. is shown below), but I'm wondering if there is another way to do this without involving server functions.
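For reference, a minimal sketch of approach 2.3. with the Cloud Functions for Firebase v1 API; the paths /events and /eventsBySecond and the field name timestamp are illustrative assumptions, not my actual structure.

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Triggered when a record arrives whose timestamp was filled in by ServerValue.TIMESTAMP (ms).
export const markSecond = functions.database
  .ref("/events/{pushId}/timestamp")
  .onCreate(async (snapshot) => {
    const millis = snapshot.val() as number;   // server-resolved timestamp, in milliseconds
    const seconds = Math.floor(millis / 1000); // truncate to whole seconds
    await admin.database().ref(`/eventsBySecond/${seconds}`).set(true);
  });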
3. Help
Are there any workarounds to record the timestamp in seconds using ServerValue, or any other method that guarantees the written timestamp matches the moment the data arrives at the server?
Thank you.

Related

Firebase cloud functions dynamic time zones

In my Android app, I am using the Realtime Database to store information about my users. That information should be updated every Monday at 00:00. I am using a Cloud Function to do this, but the problem is time zones. Right now I have set the time zone to 'Europe/Sofia' for testing purposes. The documentation says that the time zone for Cloud Functions must come from the TZ database. So I figured I could ask users for their preferred time zone before they register in my app and save it in the database. My question is: after getting the user's preferred time zone, is there a way to write only one Cloud Function and execute it dynamically for each time zone in the TZ database, or do I have to create individual functions for each time zone?
If I correctly understand your question, you could have a scheduled Cloud Function which runs every hour from 00:00 to 23:00 UTC+14:00 on Mondays, and, for every execution (i.e. for every hour within this range), query for the users that should be updated and execute the updates.
I can't go into more detail based on the info you have provided.
It's not possible to schedule a Cloud Function using a dynamic timezone. You must know the timezone at the time you write the function and declare it statically in your code.
If you want to schedule something dynamically, read through your options in this other question: https://stackoverflow.com/a/42796988/807126
So, you could schedule a repeating function that runs every hour and check whether something should be run for a user at the moment it was invoked. Or, you can schedule a single future invocation of a function with a service like Cloud Tasks, and keep rescheduling it if needed.
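As a rough illustration of that hourly approach (my own sketch, not code from either answer), assuming each user record stores an IANA time zone string under a hypothetical /users node:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Runs at the top of every hour (UTC) and updates only the users whose local time is Monday 00:xx.
export const weeklyUpdate = functions.pubsub
  .schedule("0 * * * *")
  .timeZone("UTC")
  .onRun(async () => {
    const snap = await admin.database().ref("/users").once("value");
    const now = new Date();
    snap.forEach((user) => {
      const tz = user.child("timeZone").val(); // e.g. "Europe/Sofia", saved at registration
      // Relies on Node's built-in Intl time zone data to get the user's local day/hour.
      const local = new Date(now.toLocaleString("en-US", { timeZone: tz }));
      if (local.getDay() === 1 && local.getHours() === 0) {
        // ...perform the Monday 00:00 update for this user...
      }
      return false; // keep iterating over the snapshot
    });
  });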

Firebase server time stamp same as local (almost)

Is the Firebase-generated server timestamp automatically converted into local time? I am getting a timestamp that matches my local time, so am I missing something?
_firestore.collection("9213903123").document().setData(
  {
    "title": title,
    "message": message,
    "deviceTimeStamp": DateTime.now(),
    "serverTimeStamp": FieldValue.serverTimestamp(),
  },
);
After running the above statement I can see that deviceTimeStamp and serverTimeStamp are almost the same, but there is a slight difference in their seconds.
Data in Firestore:
deviceTimeStamp: 8 August 2020 at 16:39:08 UTC+5:30 (timestamp)
message: "26"
serverTimeStamp: 8 August 2020 at 16:39:16 UTC+5:30
title: "26"
What I am basically trying to do is order notes by date/time so the user can see when they created them. But if someone creates a note and stores it in Firestore from anywhere in the world (irrespective of location), will they get their local time back when using the server timestamp? That is what I want, so users can see when they added their document. And for safety I want to use the server timestamp, in case a device's clock is not in sync with the current time.
Firestore's FieldValue.serverTimestamp() creates and stores the timestamp in UTC Epoch time at the moment the request reaches the Firestore server.
Calling serverTimestamp() doesn't create a timestamp at the time it is invoked and doesn't rely on your user's device time or timezone. Instead, it creates what could be thought of as a placeholder for the date/time. This placeholder is only converted to a timestamp once the request is received by the Firebase server. As a result, the Firestore timestamp will always be at least slightly later than your client date/time (assuming the client time is perfectly synced.)
Using Firestore's timestamp should generally meet your goals of storing dates/times for users across the world in a consistent way, retrieving / sorting notes by creation time, and converting the timestamps into the user's local timezone client side. You may, however, want to handle things slightly differently for scenarios when your users are offline.
See Doug Stevenson's article for a helpful, more detailed explanation of Firestore timestamps.
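The question uses the Flutter SDK, but the same idea sketched with the Firebase web SDK (v9) may make the flow clearer: the server fills in the timestamp in UTC when the write arrives, and the client converts it to local time only when displaying it. The collection and field names below are illustrative.

import { initializeApp } from "firebase/app";
import {
  getFirestore, collection, addDoc, getDocs,
  query, orderBy, serverTimestamp, Timestamp,
} from "firebase/firestore";

const db = getFirestore(initializeApp({ /* your config */ }));

async function addAndListNotes(title: string, message: string) {
  // Write: the placeholder is resolved to a UTC timestamp when the request reaches the server.
  await addDoc(collection(db, "notes"), {
    title,
    message,
    serverTimeStamp: serverTimestamp(),
  });

  // Read: order notes by creation time, then render each one in the viewer's local time zone.
  const snap = await getDocs(query(collection(db, "notes"), orderBy("serverTimeStamp")));
  snap.forEach((d) => {
    const ts = d.get("serverTimeStamp") as Timestamp;
    console.log(d.id, ts.toDate().toLocaleString()); // local time of whoever is viewing
  });
}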

Google Cloud Scheduler Run at Set Times Every Minute

I am trying to call an API every minute for ski lift status and check for changes. I am going to store whether the lift is open or closed in Firebase (Realtime Database), compare the value coming back from the API against it, and only update/write to that node when the value is different. Then I can set up a Cloud Function that listens for database changes and sends push notifications to the list of FCM tokens for that channel. I am not sure if this is the most efficient way, but I was going to set up scheduled functions to call the third-party API.
I have been using these docs:
https://firebase.google.com/docs/functions/schedule-functions
I was planning to do something like this:
exports.scheduledFunction = functions.pubsub.schedule('every 5 minutes').onRun((context) => {
  // call my API in here and update the database if the snapshot that comes back is different
  return null;
});
I was wondering how I would run this only between set times, say 8am-6pm EST. I am struggling to find anything about restricting run times. Should I just run the function every minute and then pause and resume by checking the time? In that case, how does it know to keep checking the time while it is paused?
Firebase scheduled functions use Cloud Scheduler to implement the schedule. It accepts cron style time specifiers to indicate when a job should be run. The full spec for that can be found here. You will have to use ranges of numbers to indicate the valid times and frequency of the schedule. For example, you might use "8-18" in the hour field to limit the hours of execution.
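For example (my sketch, with an assumed function name and Eastern time zone), restricting an every-minute schedule to roughly 8am-6pm:

import * as functions from "firebase-functions";

// Every minute of the hours 8 through 18, America/New_York time.
export const liftStatusCheck = functions.pubsub
  .schedule("* 8-18 * * *")
  .timeZone("America/New_York")
  .onRun(async (context) => {
    // call the lift-status API here and write to the Realtime Database
    // only when the value differs from what is already stored
    return null;
  });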

Make scheduler run in only one instance of multiple micro-service

I have built a micro-service with an API called deleteToken. When invoked, this API changes the status of the corresponding token tuple in the DB (identified by token id) to "MARK_DELETE". Once a tuple has status "MARK_DELETE", a REST call should be made after 30 days to a downstream service API called deleteTokenFromPartner. There is no strict requirement that deleteTokenFromPartner be called exactly 30 days later; a few hours after the 30 days is also fine.
My plan was to write a scheduler (using Quartz / the Java executor service) that runs once every day. It queries the DB, finds all rows whose status is "MARK_DELETE" and whose status update is older than 30 days, and then iteratively calls deleteTokenFromPartner for each row. There is one DB, which is highly available, and we should not have any consistency issues since we delete only after 30 days.
The problem I am seeing is that this is a micro-service with N instances, so every instance will run the scheduler, query the DB, get the same set of rows, and make calls for the same rows. Is there any tweak I can make to avoid these duplicated calls? FYI, we don't make any config changes using hostnames, and it would also be fine if only one instance were capable of running the scheduler.
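The stack here is Java (Quartz), but purely as an illustration of the daily job described above, here is a sketch in TypeScript with node-postgres; the table and column names, the downstream URL, and the post-call status update are all assumptions.

import { Pool } from "pg";

const pool = new Pool(); // connection settings taken from the PG* environment variables

async function dailyCleanup(): Promise<void> {
  // Find all tokens marked for deletion more than 30 days ago.
  const { rows } = await pool.query(
    `SELECT token_id FROM tokens
     WHERE status = 'MARK_DELETE'
       AND status_updated_at < now() - interval '30 days'`
  );

  for (const row of rows) {
    // REST call to the downstream service (hypothetical endpoint), using Node 18+ global fetch.
    await fetch(`https://partner.example.com/deleteTokenFromPartner/${row.token_id}`, {
      method: "DELETE",
    });
    // Assumed follow-up so the same row is not picked up again tomorrow.
    await pool.query(`UPDATE tokens SET status = 'DELETED' WHERE token_id = $1`, [row.token_id]);
  }
  // Note: run as-is on N instances, every instance would execute this same loop,
  // which is exactly the duplication the question is about.
}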

BizTalk 2009 - How to run a process after all messages have processed from a large disassembled file

We receive many large data files daily in a variety of formats (i.e. CSV, Excel, XML, etc.). In order to process these large files we transform the incoming data into one of our standard 'collection' message classes (using XSLT and a pipeline component - either built-in or custom), disassemble the large transformed message into individual 'object' messages and then call a series of SOAP web service methods to handle business logic and database operations.
Unlike other files received, the latest file will contain all data rows each day and therefore, we have to handle the differences to prevent identical records from being re-processed each day.
I have a suitable mechanism for handling inserts and updates but am currently struggling with the deletes (where the record exists in the database but not in the latest file).
My current thought process is to flag the deleted records in the database using a 'cleanup' task at the end of the entire process but this would require a method to be called once all 'object' messages from the disassembled file have completed.
Is it possible to monitor individual messages from a multi-record file and call a method on completion of the whole file? Currently, all research is pointing to an orchestration with some sort of 'wait' but is this the only option?
Example: File contains 100 vehicle records. This is disassembled into 100 individual XML messages which are processed using 100 calls to a web service method. Wish to call cleanup operation when all 100 messages are complete.
The best way I've found to handle the 'all rows every day' scenario is to pre-stage the data in SQL Server where it's easier to compare the 'current' set to the 'previous' set. The INTERSECT and EXCEPT operators make it pretty easy in most cases.
Then drain the records with a Polling statement.
The component that does the de-batching would need to publish a start of batch message with the number of individual records and a correlation key.
The components that do the insert & update would need to publish a completion message with the same correlation key when it is completed processing.
The start-of-batch message would spin up an orchestration that listens for the completion messages with that correlation key and counts them; once it has received the correct number, or after a timeout period, it would either call the cleanup or raise an exception.
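This is not BizTalk code, but the counting logic that orchestration would implement can be sketched as follows (TypeScript, in-memory, and without the timeout branch the answer mentions):

// Tracks one entry per disassembled file, keyed by the correlation id.
interface BatchState {
  expected: number;  // record count from the start-of-batch message
  completed: number; // completion messages seen so far
}

const batches = new Map<string, BatchState>();

// Start-of-batch message: remember how many completions to wait for.
function onBatchStart(correlationId: string, recordCount: number): void {
  batches.set(correlationId, { expected: recordCount, completed: 0 });
}

// Completion message published by the insert/update component.
function onRecordCompleted(correlationId: string, runCleanup: () => void): void {
  const state = batches.get(correlationId);
  if (!state) return;
  state.completed += 1;
  if (state.completed === state.expected) {
    batches.delete(correlationId); // all records for this file are done
    runCleanup();                  // safe to flag the deletes now
  }
}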
