Show file upload speed - firebase

Is it possible to show the user the upload speed?
I'm uploading a file (a video) to Firebase. The file is quite large (approx. 2 GB).
Can I show the user the current upload speed for that file?

It's pretty similar to how you show the progress. If you can show progress, you can access the bytesTransferred property; now you just need to calculate the speed from that property. There may be a better or more optimized way to do this, but this is how I do it:
int lastBytesTransferred = 0;
DateTime? lastBytesTransferredTime;
// to make sure the speed only updates once per interval and not every millisecond or so
Duration updateDurationThreshold = const Duration(seconds: 2);
double speed = 0;

UploadTask task =
    FirebaseStorage.instance.ref().child("files/$fileName").putData(file!);
task.snapshotEvents.listen((event) {
  if (lastBytesTransferredTime == null) {
    lastBytesTransferred = event.bytesTransferred;
    lastBytesTransferredTime = DateTime.now();
    speed = 0;
  } else {
    final now = DateTime.now();
    final Duration duration = now.difference(lastBytesTransferredTime!);
    if (duration > updateDurationThreshold) {
      final double bytesPerSecond =
          (event.bytesTransferred - lastBytesTransferred) /
              duration.inSeconds;
      lastBytesTransferred = event.bytesTransferred;
      lastBytesTransferredTime = now;
      setState(() {
        speed = bytesPerSecond / 1024; // in KB/s
      });
    }
  }
});
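To show this value in the UI you'll probably want a readable unit. As a small sketch (formatSpeed is a hypothetical helper name, not part of any SDK), the raw bytes-per-second figure could be formatted like this:
// Hypothetical helper: format a raw bytes-per-second value for display.
String formatSpeed(double bytesPerSecond) {
  if (bytesPerSecond >= 1024 * 1024) {
    return '${(bytesPerSecond / (1024 * 1024)).toStringAsFixed(1)} MB/s';
  }
  if (bytesPerSecond >= 1024) {
    return '${(bytesPerSecond / 1024).toStringAsFixed(1)} KB/s';
  }
  return '${bytesPerSecond.toStringAsFixed(0)} B/s';
}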

The Firebase SDKs report progress for file uploads to Cloud Storage, which you can use to calculate the upload speed. In Flutter you can listen for upload events, as shown in the second code sample in the documentation on handling upload tasks. From there:
For larger files it may be a better user experience if a progress indicator can be shown. Tasks can also provide a stream of events, which emits a TaskSnapshot each time a notable event occurs (e.g. upload progress). The snapshot provides information about the state of the task along with the number of bytes that have been processed:
Future<void> handleTaskExample2(String filePath) async {
  File largeFile = File(filePath);

  firebase_storage.UploadTask task = firebase_storage.FirebaseStorage.instance
      .ref('uploads/hello-world.txt')
      .putFile(largeFile);

  task.snapshotEvents.listen((firebase_storage.TaskSnapshot snapshot) {
    print('Task state: ${snapshot.state}');
    print(
        'Progress: ${(snapshot.bytesTransferred / snapshot.totalBytes) * 100} %');
  }, onError: (e) {
    // The final snapshot is also available on the task via `.snapshot`;
    // it can include 2 additional states, `TaskState.error` & `TaskState.canceled`.
    print(task.snapshot);

    if (e.code == 'permission-denied') {
      print('User does not have permission to upload to this reference.');
    }
  });

  // We can still optionally use the Future alongside the stream.
  try {
    await task;
    print('Upload complete.');
  } on firebase_core.FirebaseException catch (e) {
    if (e.code == 'permission-denied') {
      print('User does not have permission to upload to this reference.');
    }
    // ...
  }
}

Related

Blazor WASM(Hosted) with PWA : How to change the current service worker code to use Network First Strategy?

The current code looks like it uses a cache-first strategy. How do I modify it to use network-first, and then fall back to the cache if the network fails?
async function onFetch(event) {
    let cachedResponse = null;
    if (event.request.method === 'GET') {
        // For all navigation requests, try to serve index.html from cache
        // If you need some URLs to be server-rendered, edit the following check to exclude those URLs
        //const shouldServeIndexHtml = event.request.mode === 'navigate';
        console.log("onFetch : " + event.request.url.toLowerCase());
        const shouldServeIndexHtml = event.request.mode === 'navigate';
        const request = shouldServeIndexHtml ? 'index.html' : event.request;
        const cache = await caches.open(cacheName);
        cachedResponse = await cache.match(request);
    }
    return cachedResponse || fetch(event.request);
}
You can add something like this after cachedResponse = await cache.match(request);:
if (event.request.url.indexOf('/api') != -1) {
    try {
        // Network first
        var response = await fetch(event.request);
        // Update or add the cache entry
        await cache.put(event.request, response.clone());
        // Change the return value
        cachedResponse = response;
    }
    catch (e)
    {
        // Network failed; fall through and use the cached response, if any.
    }
}
This should always load API requests from the network first, since they are not part of the cache initially. The cache entry is refreshed on every successful request; if the network request fails, the cached value is used instead.

Problem with saving data in Transaction in Firebase Firestore for Flutter

I have a problem with transactions in my web application created in Flutter. For the database I use Firebase Firestore, where I save documents via transactions.
Dependency:
cloud_firestore: 3.1.1
StudentGroup is my main document. It has 4 stages, and each of them has 3-5 tasks (everything is in 1 document). I have to store a game timer, so every 10 seconds I make a request to save the time for the current stage (every stage has a different timer). I have a problem with saving tasks, because sometimes, when 2 requests are made at the same time, I get some weird state manipulation:
The task is updated and "isFinished" is set to true.
The timer is updated to the correct value, but with this update the previous task update is somehow lost and "isFinished" is set back to false.
This is how I save a task:
Future<Result> saveTask({required String sessionId, required String studentGroupId,
    required Task task}) async {
  print("trying to save task <$task>.");
  try {
    return await _firebaseFirestore.runTransaction((transaction) async {
      final studentGroupRef = _getStudentGroupDocumentReference(
          sessionId: sessionId,
          studentGroupId: studentGroupId
      );
      final sessionGroupDoc = await studentGroupRef.get();
      if (!sessionGroupDoc.exists) {
        return Result.error("student group not exists");
      }
      final sessionGroup = StudentGroup.fromSnapshot(sessionGroupDoc);
      sessionGroup.game.saveTask(task);
      transaction.set(studentGroupRef, sessionGroup.toJson());
    })
        .then((value) => taskFunction(true))
        .catchError((error) => taskFunction(false));
  } catch (error) {
    return Result.error("Error couldn't save task");
  }
}
This is how I save the time:
Future<Result> updateTaskTimer({required String sessionId,
    required String studentGroupId, required Duration duration}) async {
  print("trying to update first task timer");
  try {
    return await _firebaseFirestore.runTransaction((transaction) async {
      final studentGroupRef = _getStudentGroupDocumentReference(
          sessionId: sessionId,
          studentGroupId: studentGroupId
      );
      final sessionGroupDoc = await studentGroupRef.get();
      if (!sessionGroupDoc.exists) {
        return Result.error("student group not exists");
      }
      final sessionGroup = StudentGroup.fromSnapshot(sessionGroupDoc);
      switch (sessionGroup.game.gameStage) {
        case GameStage.First:
          sessionGroup.game.stages.first.duration = duration.inSeconds;
          break;
        case GameStage.Second:
          sessionGroup.game.stages[1].duration = duration.inSeconds;
          break;
        case GameStage.Third:
          sessionGroup.game.stages[2].duration = duration.inSeconds;
          break;
        case GameStage.Fourth:
          sessionGroup.game.stages[3].duration = duration.inSeconds;
          break;
        case GameStage.Fifth:
          sessionGroup.game.stages[4].duration = duration.inSeconds;
          break;
      }
      transaction.set(
          studentGroupRef,
          sessionGroup.toJson(),
          SetOptions(merge: true)
      );
      print("Did I finish task 4? ${sessionGroup.game.stages.first.tasks[3].isFinished}");
    })
        .then((value) => timerFunction(true))
        .catchError((error) => timerFunction(false));
  } catch (error) {
    return Result.error("Error couldn't update task timer");
  }
}
timerFunction and taskFunction print some messages to the console and return Result.error or Result.success (for now they return bool).
I don't know if I am doing something wrong with Firebase Firestore transactions. I would like to have atomic operations for reading and writing data.
Transactions ensure atomicity, which means that if the transaction succeeds, all of its reads and writes occur in a non-overlapping way with other transactions. This prevents exactly the type of problem you are describing.
But this doesn't work if you spread your reads and writes over multiple transactions. In particular, it looks to me like you are writing a task that was obtained from outside the transaction. Instead, you should use IDs or similar to track which documents you need to update, then do the read and the write inside the transaction (note that the read has to go through transaction.get() rather than a plain get() for the transaction to take it into account).
Alternatively, Firebase also provides batched writes, which let you specify only the specific properties you want to update; these ensure that any other properties are not changed. For a batched-writes example you can refer to the link.
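For illustration, here is a minimal sketch of saveTask done entirely inside one transaction, under the assumptions above (StudentGroup, Task, and game.saveTask come from the question's code; game.toJson() is assumed to exist alongside StudentGroup.toJson()):
// Minimal sketch: the read and the write both happen inside the transaction.
Future<void> saveTaskTransactional({
  required DocumentReference studentGroupRef,
  required Task task,
}) {
  return FirebaseFirestore.instance.runTransaction((transaction) async {
    // transaction.get() makes this read part of the transaction, so
    // Firestore retries the whole block if a concurrent write lands first.
    final snapshot = await transaction.get(studentGroupRef);
    final sessionGroup = StudentGroup.fromSnapshot(snapshot);
    sessionGroup.game.saveTask(task);
    // Write back only the nested `game` field instead of the whole
    // document, leaving unrelated fields untouched.
    transaction.update(studentGroupRef, {'game': sessionGroup.game.toJson()});
  });
}
The same pattern applies to updateTaskTimer: as long as both writers read through transaction.get() before writing, one of two concurrent transactions is retried instead of silently overwriting the other.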

Firestore query "onSnapshot" called at the same time does not work

I created an app with Ionic and Firestore that features live chat and I'm having a problem with it.
The conversation is loaded with the method:
refUneConversationMyUserCol.ref.orderBy('date', 'desc').limit(20).get()
On top of this, an "onSnapshot" listener is added to retrieve the last message sent live:
this.unsubscribeDataUneConversation = refUneConversationMyUserCol.ref.orderBy('date', 'desc').limit(1).onSnapshot(result => {
  console.log(result.docs[0].data());
  if (this.isCalledBySnapshot === false) {
    this.isCalledBySnapshot = true;
  } else if (result.docs[0].data().expediteur !== this.authentificationService.uidUserActif) {
    const data = result.docs[0].data();
    const id = result.docs[0].id;
    this.dataUneConversation.push({ id, ...data } as UneConversation);
  }
});
It works perfectly; however, when a message is sent from each side at the same time (2 different accounts talking to each other), I run into a problem: the onSnapshot is triggered only once and I only receive one message.
Note that both messages are stored correctly in the database; they are just not both displayed during the live session.
Do you have any idea why?
Thank you
(Here is the whole method)
async getDataUneConversation(idI: string) {
  if (this.loadedDataUneConversation !== idI) {
    /* ANCHOR Live messages */
    this.isCalledBySnapshot = false;
    if (this.unsubscribeDataUneConversation) {
      await this.unsubscribeDataUneConversation();
    }
    const refUneConversationMyUserCol = this.afs.collection<User>('users').doc<User>(this.authentificationService.uidUserActif).collection<Conversations>('conversations');
    const result = await refUneConversationMyUserCol.ref.orderBy('date', 'desc').limit(20).get();
    /* ANCHOR Live messages */
    this.unsubscribeDataUneConversation = refUneConversationMyUserCol.ref.orderBy('date', 'desc').limit(1).onSnapshot(result => {
      console.log(result.docs[0].data());
      if (this.isCalledBySnapshot === false) {
        this.isCalledBySnapshot = true;
      } else if (result.docs[0].data().expediteur !== this.authentificationService.uidUserActif) {
        const data = result.docs[0].data();
        const id = result.docs[0].id;
        this.dataUneConversation.push({ id, ...data } as UneConversation);
      }
    });
    /* ANCHOR Raw messages */
    if (result.docs.length < 20) {
      this.infiniteLastUneConversationMax = true;
    } else {
      this.infiniteLastUneConversationMax = false;
    }
    this.infiniteLastUneConversation = result.docs[result.docs.length - 1];
    this.dataUneConversation = result.docs.map(doc => {
      const data = doc.data();
      const id = doc.id;
      return { id, ...data } as UneConversation;
    });
    this.dataUneConversation.reverse();
    this.loadedDataUneConversation = idI;
  }
}
EDIT (working version):
this.unsubscribeDataUneConversation = refUneConversationMyUserCol.ref.orderBy('date', 'asc')
  .startAfter(this.dataUneConversation[this.dataUneConversation.length - 1].date)
  .onSnapshot(result => {
    result.docs.forEach(element => {
      const data = element.data();
      const id = element.id;
      if (!this.dataUneConversation.some(e => e.id === element.id)) {
        this.dataUneConversation.push({ id, ...data } as UneConversation);
      }
    });
  });
You're limiting live messages to only the one last message. In a chat app, you want to listen to all new messages, so the issue is probably in your .limit(1) clause.
But if you simply remove the limit, you'll get the whole conversation, with all messages since the conversation started.
My approach would be like this:
Get the date of the last message from your refUneConversationMyUserCol... conversation loader.
When you do the onSnapshot() to get the last message, do not limit it to 1 message; instead, start at a date after the date of the last loaded message.
Since you're ordering by date anyway, this will be an easy fix. Look into "Adding a cursor to your query".
Basically, you'll be saying to Firestore: give me LIVE new messages but start at NOW - and even if there are many messages posted at the same time, you'll get them all, since you're not limiting to 1.
Feel free to ask if this is not clear enough.

React Native AsyncStorage | Row too big to fit into CursorWindow

I'm using AsyncStorage in React Native to store some data (large, >2 MB in size) on the device, and then read it with the following code:
try {
  const value = await AsyncStorage.getItem('date_stored_copy');
} catch (e) {
  console.log(e);
}
I'm getting the following error:
Row too big to fit into CursorWindow requiredPos=0, totalRows=1...
Is there any way to increase the CursorWindow size, or an alternative to AsyncStorage?
An alternative solution is to split the data into chunks before writing it.
I wrote a wrapper around AsyncStorage that does exactly that: https://gist.github.com/bureyburey/2345dfa88a31e00a514479be37848d42
Be aware that it was originally written for use with apollo-cache-persist (a persistence lib for apollo-client), and since GraphQL stores data in a very flat structure, this solution works pretty well out of the box.
For your case, if your stored object looks like this:
{
  data: { a lot of data here }
}
then it won't matter much and the wrapper won't help.
But if your object looks like this:
{
  someData: { partial data },
  someMoreData: { more partial data },
  ....
}
then in theory it should work.
Full disclosure: I haven't tested it thoroughly yet and have only used it with apollo-cache-persist.
I ran into this problem too; here is how I solved it.
Basic description of the algorithm:
The "key" holds the number of parts the data is divided into. (Example: the key is "MyElementToStore" and its value is 7, the number of parts the data needs to be split into so that each part fits in a row of the AsyncStorage.)
Each part is then stored as an individual row in the AsyncStorage, named after the key followed by the index of the part. (Example: ["MyElementToStore0", "MyElementToStore1", ...])
Retrieving data works the other way around: each row is retrieved and appended to the result that is returned.
A final note on clearing the store: it's important to remove each part before removing the key (use the last function, clearStore, to make sure you release memory correctly).
AsyncStorage documentation
import AsyncStorage from "@react-native-async-storage/async-storage";

const getStore = async (key) => {
  try {
    let store = "";
    let numberOfParts = await AsyncStorage.getItem(key);
    if (typeof(numberOfParts) === 'undefined' || numberOfParts === null)
      return null;
    else
      numberOfParts = parseInt(numberOfParts);
    for (let i = 0; i < numberOfParts; i++) { store += await AsyncStorage.getItem(key + i); }
    if (store === "")
      return null;
    return JSON.parse(store);
  } catch (error) {
    console.log("Could not get [" + key + "] from store.");
    console.log(error);
    return null;
  }
};

const saveStore = async (key, data) => {
  try {
    // Split the serialized data into parts of at most 1,000,000 characters each.
    const store = JSON.stringify(data).match(/.{1,1000000}/g);
    store.forEach((part, index) => { AsyncStorage.setItem((key + index), part); });
    AsyncStorage.setItem(key, ("" + store.length));
  } catch (error) {
    console.log("Could not save store : ");
    console.log(error.message);
  }
};

const clearStore = async (key) => {
  try {
    console.log("Clearing store for [" + key + "]");
    let numberOfParts = await AsyncStorage.getItem(key);
    if (typeof(numberOfParts) !== 'undefined' && numberOfParts !== null) {
      numberOfParts = parseInt(numberOfParts);
      for (let i = 0; i < numberOfParts; i++) { AsyncStorage.removeItem(key + i); }
      AsyncStorage.removeItem(key);
    }
  } catch (error) {
    console.log("Could not clear store : ");
    console.log(error.message);
  }
};
I found another alternative mentioned here.
Just install react-native-fs-store:
npm i react-native-fs react-native-fs-store
react-native link react-native-fs
And use it like this:
import Store from "react-native-fs-store";
const AsyncStorage = new Store('store1');
It has exactly the same API as AsyncStorage, so no code changes are required.
** Please note that react-native-fs-store is slower than AsyncStorage, as each operation is synced to a file. You may notice lag (an unresponsive screen) while reading/writing data.
android/app/src/main/java/com/tamotam/mainApp/MainApplication.java
import android.database.CursorWindow;
import java.lang.reflect.Field;

...

@Override
public void onCreate() {
  ...
  try {
    Field field = CursorWindow.class.getDeclaredField("sCursorWindowSize");
    field.setAccessible(true);
    field.set(null, 100 * 1024 * 1024); // the 100MB is the new size
  } catch (Exception e) {
    e.printStackTrace();
  }
}
This fixed the issue for me; remember to include the 2 imports!
As per https://github.com/andpor/react-native-sqlite-storage/issues/364#issuecomment-665800433, some solutions add an extra if (DEBUG_MODE)... check, but in my case it caused "Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0."

How can I delete a post from a supergroup in telegram with telegram-cli?

We have a group in Telegram, and we have a rule that no one may post a message to the group between 23:00 and 7:00. I want to automatically delete messages posted to the group between these times. Could anyone tell me how I can do that with telegram-cli or any other Telegram client?
Use the new version of telegram-cli. It's not fully open source, but you can download a binary from its site, where you can also find some examples.
I hope the following snippet in JavaScript will help you achieve your goal.
var spawn = require('child_process').spawn;
var readline = require('readline');

// delay between restarts of the client in case of failure
const RESTARTING_DELAY = 1000;
// the main object for a process of telegram-cli
var tg;

function launchTelegram() {
  tg = spawn('./telegram-cli', ['--json', '-DCR'],
      { stdio: ['ipc', 'pipe', process.stderr] });

  readline.createInterface({ input: tg.stdout }).on('line', function(data) {
    try {
      var obj = JSON.parse(data);
    } catch (err) {
      if (err.name == 'SyntaxError') {
        // sometimes the client sends not only JSON; plain text needs no
        // processing, just output it for easy debugging
        console.log(data.toString());
      } else {
        throw err;
      }
    }
    if (obj) {
      processUpdate(obj);
    }
  });

  tg.on('close', function(code) {
    // sometimes telegram-cli fails due to bugs, then try to restart it,
    // skipping problematic messages
    setTimeout(function(tg) {
      tg.kill(); // the program terminates by sending double SIGINT
      tg.kill();
      tg.on('close', launchTelegram); // start again for updates
                                      // as soon as it is finished
    }, RESTARTING_DELAY, spawn('./telegram-cli', { stdio: 'inherit' }));
  });
}

function processUpdate(upd) {
  var currentHour = new Date().getHours();
  // between 23:00 and 7:00 the hour is either >= 23 or < 7
  if ((currentHour >= 23 || currentHour < 7) &&
      upd.ID === 'UpdateNewMessage' && upd.message_.can_be_deleted_) {
    // if the message meets certain criteria, send a command to telegram-cli
    // to delete it
    tg.send({
      'ID': 'DeleteMessages',
      'chat_id_': upd.message_.chat_id_,
      'message_ids_': [ upd.message_.id_ ]
    });
  }
}

launchTelegram(); // just launch these gizmos
We activate JSON mode by passing the --json key. telegram-cli appends an underscore to all fields in objects. See all available methods in the full schema.
