I have a chat app powered by Firebase, and I'd like to get a timestamp from Firebase before pushing any data.
Specifically, I'd like to get the time that a user presses the send button for a voice message. I don't actually push the message to Firebase until the upload is successful (so that the audio file is guaranteed to be there when a recipient receives the message). If I were to simply use Firebase.ServerValue.TIMESTAMP, there could be an ordering issue due to different upload durations. (A very short message following a very long one, for example.)
Is there any way to ping Firebase for a timestamp that I'm not seeing in the docs? Thank you!
If you want to separate the click from the actual writing of the data:
var newItemRef = ref.push();
uploadAudioAndThen(audioFile, function(downloadURL) {
  newItemRef.set({
    url: downloadURL,
    savedTimestamp: Firebase.ServerValue.TIMESTAMP
  });
});
This does a few things:
it creates a reference for the item before uploading. This reference will have a push ID based on when the upload started. Nothing is written to the database at this point, but the key of the new location is determined.
it then does the upload and "waits for it" to complete.
in the completion handler of the upload, it writes to the new location it determined in step 1.
it writes the server timestamp at that moment, which is when the upload is finished.
So you now have two timestamps. One is when the upload started and is encoded into the key/push ID of the new item; the other is when the upload completed and is in the savedTimestamp property.
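As an aside, the timestamp baked into a push ID can be recovered on the client. Here is a minimal sketch, assuming the push ID format Firebase has described publicly (the first 8 characters encode the creation time in milliseconds using a fixed, ASCII-ordered 64-character alphabet); pushIdToDate is an illustrative helper, not part of the SDK:
// Alphabet used by Firebase push IDs, per Firebase's published description.
var PUSH_CHARS = '-0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_abcdefghijklmnopqrstuvwxyz';

function pushIdToDate(pushId) {
  var timestamp = 0;
  // The first 8 characters hold the millisecond timestamp, most significant digit first.
  for (var i = 0; i < 8; i++) {
    timestamp = timestamp * 64 + PUSH_CHARS.indexOf(pushId.charAt(i));
  }
  return new Date(timestamp); // roughly when push() was called on the client
}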
To get the 3 most recently started uploads that have already completed:
ref.orderByKey().limitToLast(3).on(...
To get the 3 most recently finished uploads:
ref.orderByChild('savedTimestamp').limitToLast(3).on(...
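The uploadAudioAndThen() above is a placeholder for whatever performs the upload. A minimal sketch of one way to fill it in, assuming the audio is stored with the Firebase Storage JS SDK (firebase.storage(), from the newer namespaced SDK; the 'audio/...' path and file extension are illustrative):
function uploadAudioAndThen(audioFile, callback) {
  // Illustrative storage path; any unique location works.
  var storageRef = firebase.storage().ref('audio/' + Date.now() + '.m4a');
  storageRef.put(audioFile)
    .then(function(snapshot) {
      return snapshot.ref.getDownloadURL();
    })
    .then(function(downloadURL) {
      // Only once the upload has finished is the database entry written
      // (see newItemRef.set above).
      callback(downloadURL);
    });
}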
Do event listeners guarantee that all data ever written to a path will be delivered to the client eventually?
For instance, if I have a game client that pushes moves to the same path one after the other, will the listening client receive all updates?
What would happen in this situation: client A pushes move 1 to game1/user1/move_data, which client B is listening on; client A then immediately pushes another move, updating the value at game1/user1/move_data.
Will the listening client be guaranteed to receive all moves pushed?
Currently I have a system that creates a new path per move, and then I call single listeners on each move as each client reaches that move in its state. It doesn't seem efficient: if client A receives the most recent move that client B has made, then client A begins listening on a path that doesn't exist yet.
The below quotes are from this link: https://firebase.google.com/docs/database/admin/retrieve-data
"The value event is used to read a static snapshot of the contents at a given database path, as they existed at the time of the read event. It is triggered once with the initial data and again every time the data changes. The event callback is passed a snapshot containing all data at that location, including child data. In the code example above, value returned all of the blog posts in your app. Everytime a new blog post is added, the callback function will return all of the posts."
The part about "as they existed at the time of the read event" causes me to think that if a listener is on a path, then the client will eventually receive all values ever written to that path.
There is also this line from the guarantees section which I am struggling to decipher:
"Value events are always triggered last and are guaranteed to contain updates from any other events which occurred before that snapshot was taken."
I am working with a language that does not have a Google-based SDK and am asking this question so I can further assess Firebase's suitability for my uses.
Firebase Realtime Database performs state synchronization. If a client is listening to data in a location, it will receive the state of that data. If there are changes in the data, it will receive the latest state of that data.
...if I have a game client that pushes moves to the same path one after the other will the listening client receive all updates?
If there are multiple updates before the Firebase server has a chance to send the state to a listener, it may skip some intermediate values. So there is no guarantee that your client will see every state change, there is just a guarantee that it will eventually see the latest state.
If you want to ensure that all clients (can) see all state changes, you should store the state changes themselves in the database.
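A minimal sketch of that approach, assuming a path such as game1/user1/moves (the path and the applyMove() handler are illustrative, not from the question). Each move is appended with push(), so child_added fires once per move on every listener and no intermediate state can be skipped:
var movesRef = firebase.database().ref('game1/user1/moves');

// Client A appends each move instead of overwriting a single move_data value:
movesRef.push({ move: 'e2e4', at: firebase.database.ServerValue.TIMESTAMP });

// Client B receives every move exactly once, in order:
movesRef.on('child_added', function(snapshot) {
  applyMove(snapshot.val()); // the game's own handler (assumed)
});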
Try this code to get the updated value from the Firebase database:
mFirebaseInstance = FirebaseDatabase.getInstance();
mFirebaseDatabase = mFirebaseInstance.getReference();

// Write a value...
mFirebaseDatabase.child("new_title").setValue("Realtime Database");

// ...and listen for every change to it.
mFirebaseDatabase.child("new_title").addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        String appTitle = dataSnapshot.getValue().toString();
        Log.e("Hey", appTitle);
        title.setText(appTitle);
    }

    @Override
    public void onCancelled(DatabaseError error) {
        // Failed to read value
        Log.e("Hey", "Failed to read app title value.", error.toException());
    }
});
Since the Drive SDK v3 we are able to receive push notifications from Google Drive whenever a file has changed. At the moment I'm working on a Drive application in Python and I would like to receive such notifications. Do I really need a web server for this, or can I implement this with a socket or something similar?
I know that I can get changes by polling the changes.list method, but I want to avoid this because of the many API calls. Is there a better way to be informed when a file has changed?
EDIT: I captured my web traffic and saw that the original Google Drive client for Windows uses push notifications. So in some way it must be possible to get push notifications in a desktop application, but is this perhaps some sort of Google magic that we can't use with the current API?
For Google Drive apps that need to keep track of changes to files, the Changes collection provides an efficient way to detect changes to all files, including those that have been shared with a user. The collection works by providing the current state of each file, if and only if the file has changed since a given point in time.
Retrieving changes requires a pageToken to indicate a point in time to fetch changes from.
# Begin with our last saved start token for this user or the
# current token from getStartPageToken()
page_token = saved_start_page_token
while page_token is not None:
    response = drive_service.changes().list(pageToken=page_token,
                                            fields='*',
                                            spaces='drive').execute()
    for change in response.get('changes'):
        # Process change
        print('Change found for file: %s' % change.get('fileId'))
    if 'newStartPageToken' in response:
        # Last page, save this token for the next polling interval
        saved_start_page_token = response.get('newStartPageToken')
    page_token = response.get('nextPageToken')
I have clients connecting to the database with JavaScript.
I also have code running on my server, and I'm trying to do a transaction following the example shown here:
https://firebase.google.com/docs/database/server/save-data#section-transactions
Here's a simplified structure of my data
users
  userguid
    resource: "room1"
    printer: "printer1"
resources
  rooms
    room1
  printers
    printer1
      counter: 15
The web client would write a request to their own node under "users".
The server is watching for those requests and updates the counter for that resource.
If I have the transaction watching for child added, I get null for the counter, so I can't increment the number. If I also watch for child modified, then I will get the correct counter value.
I understand from the documentation that the value in transaction can be null but I'm not sure how I can fix my use case to do what I need.
Basically I don't want the client touching the counter, I want the server to read and update that value.
I've gone through this post
Firebase runTransaction not working
but I'm not clear on how to structure my code to deal with this.
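A minimal sketch of how the server side could handle the null case in the transaction, assuming the Node.js firebase-admin SDK and the structure above (the request-watching path and field names are taken loosely from the question; adapt the trigger to how requests actually arrive). The key point is that the update function may first run with null from the local cache; return a sensible initial value and Firebase re-runs it against the real server value before committing:
var admin = require('firebase-admin');
admin.initializeApp(); // credentials and databaseURL assumed to be configured

var db = admin.database();

db.ref('users').on('child_added', function(userSnap) {
  var printer = userSnap.child('printer').val(); // e.g. "printer1"
  var counterRef = db.ref('resources/printers/' + printer + '/counter');

  counterRef.transaction(function(current) {
    // current may be null on the first attempt; start from 0 in that case.
    // If the server value differs, the function is retried with it.
    return (current || 0) + 1;
  });
});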
I have a BizTalk receive port monitoring an FTP location. I expect a file to arrive at least once per day in that location and for BizTalk to pick it up and kick off an orchestration. This part is working fine.
However, sometimes the sender fails to send a message during a day, in which case I want an email to be sent to notify the users that something is amiss.
I could solve this outside of BizTalk, by creating a daily job that looks in our database for processed files and makes sure there is at least one in any given day. However, I'd prefer to solve this "in line" with the BizTalk solution that is already in place, and not deploy a separate, unrelated job which will increase maintenance headaches.
Is there any functionality in BizTalk that would allow me to send a notification if a receive port doesn't receive something in a given timeframe?
Short answer: Not really.
The logic you want to implement would require a customised version of the FTP Adapter. It depends on how comfortable you are rolling up your sleeves and getting into the Adapter SDK.
If you wanted to keep your solution "Purely BizTalk", you could set up a secondary Orchestration using a SQL Receive Location tied to a stored procedure. This stored procedure executes regularly and looks for records in your "Processed File" table received in the past (business) day. If none are found, it fabricates a record and returns it via the SQL Receive Location. This would be your trigger to send the email notification.
One solution, though not elegant, is to have a secondary FILE receive location with a schedule window outside your cutoff time.
Failure scenario:
In this FILE receive location, you have an intelligent/dummy message conforming to the same schema as the FTP receive. The intelligent part is that one of the fields in the message tells us the last time we received the file from FTP. The rest of the content is dummy.
Within your orchestration, you check where you received your file from. If it is the secondary receive location (using the context property BTS.ReceiveLocationName), you check the date field of this dummy/intelligent message. If it shows the last FTP file arrived more than 24 hours ago (or similar logic), you send an email notifying that you did not receive the file from the upstream FTP process, and you also save a copy of the dummy message you just received back to the secondary FILE receive location, unchanged.
Success Scenario:
Apart from normal processing, you save a copy of the dummy/intelligent message to the secondary FILE receive location, with the datetime field reflecting when you processed the file received from the FTP receive location.
Initialising:
You start with a dummy/intelligent message in the secondary FILE receive location with the datetime field value well in the past (assuming we never received the file from FTP) or with yesterday's date (assuming we received a file successfully from FTP the day before).
Overview:
Your orchestration has two trigger points.
When you receive a file via FTP
A scheduled FILE receive location, triggered after the cut-off time.