How to avoid blocking while uploading a file using a Meteor method - meteor

I've created a Meteor method to upload a file. It works well, but until the file is fully uploaded I cannot move around: all subscriptions seem to wait until the upload finishes. Is there a way to avoid that?
Here is the code on the server:
Meteor.publish('product-photo', function (productId) {
return Meteor.photos.find({productId: productId}, {limit: 1});
});
Meteor.methods({
/**
* Creates a photo
* @param obj
* @return {*}
*/
createPhoto: function (obj) {
check(obj, Object);
// Filter attributes
obj = filter(obj, [
'name',
'productId',
'size',
'type',
'url'
]);
// Check user
if (!this.userId) {
throw new Meteor.Error('not-connected');
}
// Check file name
if (typeof obj.name !== 'string' || obj.name.length > 255) {
throw new Meteor.Error('invalid-file-name');
}
// Check file type
if (typeof obj.type !== 'string' || [
'image/gif',
'image/jpg',
'image/jpeg',
'image/png'
].indexOf(obj.type) === -1) {
throw new Meteor.Error('invalid-file-type');
}
// Check file url
if (typeof obj.url !== 'string' || obj.url.length < 1) {
throw new Meteor.Error('invalid-file-url');
}
// Check file size
if (typeof obj.size !== 'number' || obj.size <= 0) {
throw new Meteor.Error('invalid-file-size');
}
// Check file max size
if (obj.size > 1024 * 1024) {
throw new Meteor.Error('file-too-large');
}
// Check if product exists
if (!obj.productId || Meteor.products.find({_id: obj.productId}).count() !== 1) {
throw new Meteor.Error('product-not-found');
}
// Limit the number of photos per user
if (Meteor.photos.find({productId: obj.productId}).count() >= 3) {
throw new Meteor.Error('max-photos-reached');
}
// Resize the photo if the data is in base64
if (typeof obj.url === 'string' && obj.url.indexOf('data:') === 0) {
obj.url = resizeImage(obj.url, 400, 400);
obj.size = obj.url.length;
obj.type = 'image/png';
}
// Add info
obj.createdAt = new Date();
obj.userId = this.userId;
return Meteor.photos.insert(obj);
}
});
And the code on the client:
Template.product.events({
'change [name=photo]': function (ev) {
var self = this;
readFilesAsDataURL(ev, function (event, file) {
var photo = {
name: file.name,
productId: self._id,
size: file.size,
type: file.type,
url: event.target.result
};
Session.set('uploadingPhoto', true);
// Save the file
Meteor.call('createPhoto', photo, function (err, photoId) {
Session.set('uploadingPhoto', false);
if (err) {
displayError(err);
} else {
notify(i18n("Transfert terminé pour {{name}}", photo)); // French i18n key: "Transfer finished for {{name}}"
}
});
});
}
});

I finally found the solution myself.
Explanation: the code I used was blocking the subscriptions because it transferred the whole file, from the first byte to the last, in a single method call, which blocks the thread (I think the one reserved for each user on the server) until the transfer is complete.
Solution: I split the file into small chunks (the code below uses 4 KB) and sent them one by one; this way the thread (or whatever was blocking the subscriptions) is freed after each chunk transfer.
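As an aside (not part of this fix), Meteor also provides this.unblock() inside methods, which releases the per-client method queue so that a long-running method doesn't hold up the same client's other method calls; a minimal sketch:
Meteor.methods({
  createPhoto: function (obj) {
    // Let this client's subsequent method calls run in a new fiber
    // instead of waiting for this upload-related call to finish
    this.unblock();
    // ... validation and insert as in the method above ...
  }
});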
The final working solution is on that post : How to write a file from an ArrayBuffer in JS
Client Code
// data is the ArrayBuffer result of a FileReader's readAsArrayBuffer() call
var total = data.byteLength;
var offset = 0;
var upload = function() {
var length = 4096; // chunk size
// adjust the last chunk size
if (offset + length > total) {
length = total - offset;
}
// I am using Uint8Array to create the chunk
// because it can be passed to the Meteor.method natively
var chunk = new Uint8Array(data, offset, length);
if (offset < total) {
// Send the chunk to the server and tell it what file to append to
Meteor.call('uploadFileData', fileId, chunk, function (err, length) {
if (!err) {
offset += length;
upload();
}
});
}
};
upload();
Server code
var fs = Npm.require('fs');
var Future = Npm.require('fibers/future');
Meteor.methods({
uploadFileData: function(fileId, chunk) {
var fut = new Future();
var path = '/uploads/' + fileId;
// The chunk arrives as a Uint8Array, which Buffer.from() accepts directly
fs.appendFile(path, Buffer.from(chunk), function (err) {
if (err) {
fut.throw(err);
} else {
fut.return(chunk.length);
}
});
return fut.wait();
}
});

Improving @Karl's code:
Client
This function breaks the file into chunks and sends them to the server one by one.
function uploadFile(file) {
const reader = new FileReader();
let _offset = 0;
let _total = file.size;
return new Promise((resolve, reject) => {
function readChunk() {
var length = 10 * 1024; // chunk size
// adjust the last chunk size
if (_offset + length > _total) {
length = _total - _offset;
}
if (_offset < _total) {
const slice = file.slice(_offset, _offset + length);
reader.readAsArrayBuffer(slice);
} else {
// EOF
setProgress(100);
resolve(true);
}
}
reader.onload = function readerOnload() {
let buffer = new Uint8Array(reader.result) // convert to binary
Meteor.call('fileUpload', file.name, buffer, _offset,
(error, length) => {
if (error) {
console.log('Oops, unable to import!');
return false;
} else {
_offset += length;
readChunk();
}
}
);
};
reader.onloadend = function readerOnloadend() {
setProgress(100 * _offset / _total);
};
readChunk();
});
}
Server
The server then writes to the file when the offset is zero, and appends to its end otherwise. It returns a promise, as I used an asynchronous function to write/append in order to avoid blocking the client.
if (Meteor.isServer) {
var fs = require('fs');
var Future = require('fibers/future');
}
Meteor.methods({
// Upload file from client to server
fileUpload(
fileName: string,
fileData: Uint8Array,
offset: number) {
check(fileName, String);
check(fileData, Uint8Array);
check(offset, Number);
console.log(`[x] Received file ${fileName} data length: ${fileData.length}`);
if (Meteor.isServer) {
const fut = new Future();
const filePath = '/tmp/' + fileName;
const buffer = Buffer.from(fileData);
const jot = offset === 0 ? fs.writeFile : fs.appendFile;
jot(filePath, buffer, 'binary', (err) => {
if (err) {
fut.throw(err);
} else {
fut.return(buffer.length);
}
});
return fut.wait();
}
}
});
Usage
uploadFile(file)
.then(() => {
/* do your stuff */
});

Related

How to get the entire path in Next.js 13 in custom loader function, for server components only?

I have a loader function called getBlogData which is like this:
import { getFromCacheOrApi } from 'Base'
const getBlogData = async () => {
const { pathname } = { pathname: "" }
var url = '/blog/data?'
let matches = /\/blog(\/\d+)?\/?$/.exec(pathname)
if (matches != null) {
const pageNumber = matches[1]
if (pageNumber !== undefined) {
url += `&pageNumber=${pageNumber.replace('/', '')}`
}
}
else {
const secondSegments = ['category', 'tag', 'author', 'search']
if (pathname.split('/').length >= 2 && !secondSegments.includes(pathname.split('/')[2])) {
response.status = 404
return
}
for (let i = 0; i < secondSegments.length; i++) {
const segment = secondSegments[i]
if (pathname.startsWith(`/blog/${segment}`)) {
matches = new RegExp(`(?<=\\/blog\\/${segment}\\/)[^/]+\\/?(\\d+)?\\/?$`).exec(pathname)
if (matches == null) {
response.status = 404
return
}
else {
url += `&${segment}=${encodeURI(matches[0].split('/')[0])}`
const pageNumber = matches[1]
if (pageNumber !== undefined) {
url += `&pageNumber=${pageNumber}`
}
break
}
}
}
}
url = url.replace('?&', '?')
const data = await getFromCacheOrApi(url)
// console.log(params, response.status)
// if (pageNumber && isNaN(pageNumber)) {
// console.log(pageNumber, isNaN(pageNumber))
// response.status = 400
// return
// }
const { seoParameters } = data
return data
}
export default getBlogData
This function is only used in my page, which is inside the app directory in Next 13, which means it's a server component, and I don't want to change it to a client component.
However, I need to access request data, in this particular case, the path of the URL.
How can I get that?
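One commonly suggested workaround (a sketch, not from the original post; the x-pathname header name is an arbitrary choice) is to have Next.js middleware copy the pathname into a request header, which the server-side loader can then read with headers() from next/headers:
// middleware.js (project root)
import { NextResponse } from 'next/server'

export function middleware(request) {
  // Forward the current pathname to server components via a custom header
  const requestHeaders = new Headers(request.headers)
  requestHeaders.set('x-pathname', request.nextUrl.pathname)
  return NextResponse.next({ request: { headers: requestHeaders } })
}

// getBlogData.js - replace the placeholder destructuring with:
import { headers } from 'next/headers'

const getBlogData = async () => {
  const pathname = headers().get('x-pathname') ?? ''
  // ... rest of the loader logic unchanged ...
}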

Firestore: remove document and collections and same time [duplicate]

How can you delete a Document with all its collections and nested subcollections? (inside the functions environment)
In the RTDB you can ref.child('../someNode').setValue(null) and that completes the desired behavior.
I can think of two ways you could achieve the desired delete behavior, both with tremendously ghastly drawbacks.
Create a 'Super' function that will spider every document and delete them in a batch.
This function would be complicated, brittle to changes, and might take a lengthy execution time.
Add 'onDelete' triggers for each Document type, and make it delete any direct subcollections. You'll call delete on the root document, and the deletion calls will propagate down the 'tree'. This is sluggish, scales atrociously and is costly due to the colossal load of function executions.
Imagine you would have to delete a 'GROUP' and all its children. It would be deeply chaotic with #1 and pricey with #2 (1 function call per doc)
groups > GROUP > projects > PROJECT > files > FILE > assets > ASSET
> urls > URL
> members > MEMBER
> questions > QUESTION > answers > ANSWER > replies > REPLY
> comments > COMMENT
> resources > RESOURCE > submissions > SUBMISSION
> requests > REQUEST
Is there a superior/favored/cleaner way to delete a document and all its nested subcollections?
It ought to be possible considering you can do it from the console.
According to the Firebase documentation:
https://firebase.google.com/docs/firestore/solutions/delete-collections
Deleting a collection with nested subcollections can be done easily and neatly with Node.js on the server side.
const client = require('firebase-tools');
await client.firestore
.delete(collectionPath, {
project: process.env.GCLOUD_PROJECT,
recursive: true,
yes: true
});
Unfortunately, your analysis is spot on and indeed this use case does require a lot of ceremony. According to the official documentation, there is no support for deep deletes in a single shot in Firestore, whether via the client libraries, the REST API, or the CLI tool.
The CLI is open sourced and its implementation lives here: https://github.com/firebase/firebase-tools/blob/master/src/firestore/delete.js. They basically implemented option 1 you described in your question, so you can take some inspiration from there.
Both options 1 and 2 are far from ideal, and to make your solution 100% reliable you will need to keep a persistent queue with deletion tasks, as any error in the long-running procedure will leave your system in an ill-defined state.
I would discourage going with raw option 2, as recursive cloud function calls may very easily go wrong - for example, hitting max limits.
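For illustration only (this sketch is not from the linked tool; the deletionTasks collection name and both helpers are hypothetical), such a persistent queue can be a Firestore collection drained by a scheduled worker, so a crashed run simply resumes later:
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// Enqueue a root path for deletion instead of deleting inline
async function requestDelete(path) {
  await db.collection('deletionTasks').add({
    path: path,
    createdAt: admin.firestore.FieldValue.serverTimestamp()
  });
}

// Scheduled worker: drain a few tasks per run, dequeue only after success
async function processDeletionTasks() {
  const tasks = await db.collection('deletionTasks').limit(10).get();
  for (const task of tasks.docs) {
    try {
      // recursiveDelete is available in newer Admin SDK versions (see further down);
      // this assumes a document path was enqueued
      await db.recursiveDelete(db.doc(task.data().path));
      await task.ref.delete(); // dequeue only after the delete fully succeeded
    } catch (e) {
      // Leave the task queued; the next scheduled run will retry it
      console.error('delete failed, will retry', task.data().path, e);
    }
  }
}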
In case the link changed, below the full source of https://github.com/firebase/firebase-tools/blob/master/src/firestore/delete.js:
"use strict";
var clc = require("cli-color");
var ProgressBar = require("progress");
var api = require("../api");
var firestore = require("../gcp/firestore");
var FirebaseError = require("../error");
var logger = require("../logger");
var utils = require("../utils");
/**
* Construct a new Firestore delete operation.
*
* @constructor
* @param {string} project the Firestore project ID.
* @param {string} path path to a document or collection.
* @param {boolean} options.recursive true if the delete should be recursive.
* @param {boolean} options.shallow true if the delete should be shallow (non-recursive).
* @param {boolean} options.allCollections true if the delete should universally remove all collections and docs.
*/
function FirestoreDelete(project, path, options) {
this.project = project;
this.path = path;
this.recursive = Boolean(options.recursive);
this.shallow = Boolean(options.shallow);
this.allCollections = Boolean(options.allCollections);
// Remove any leading or trailing slashes from the path
if (this.path) {
this.path = this.path.replace(/(^\/+|\/+$)/g, "");
}
this.isDocumentPath = this._isDocumentPath(this.path);
this.isCollectionPath = this._isCollectionPath(this.path);
this.allDescendants = this.recursive;
this.parent = "projects/" + project + "/databases/(default)/documents";
// When --all-collections is passed any other flags or arguments are ignored
if (!options.allCollections) {
this._validateOptions();
}
}
/**
* Validate all options, throwing an exception for any fatal errors.
*/
FirestoreDelete.prototype._validateOptions = function() {
if (this.recursive && this.shallow) {
throw new FirebaseError("Cannot pass recursive and shallow options together.");
}
if (this.isCollectionPath && !this.recursive && !this.shallow) {
throw new FirebaseError("Must pass recursive or shallow option when deleting a collection.");
}
var pieces = this.path.split("/");
if (pieces.length === 0) {
throw new FirebaseError("Path length must be greater than zero.");
}
var hasEmptySegment = pieces.some(function(piece) {
return piece.length === 0;
});
if (hasEmptySegment) {
throw new FirebaseError("Path must not have any empty segments.");
}
};
/**
* Determine if a path points to a document.
*
* @param {string} path a path to a Firestore document or collection.
* @return {boolean} true if the path points to a document, false
* if it points to a collection.
*/
FirestoreDelete.prototype._isDocumentPath = function(path) {
if (!path) {
return false;
}
var pieces = path.split("/");
return pieces.length % 2 === 0;
};
/**
* Determine if a path points to a collection.
*
* @param {string} path a path to a Firestore document or collection.
* @return {boolean} true if the path points to a collection, false
* if it points to a document.
*/
FirestoreDelete.prototype._isCollectionPath = function(path) {
if (!path) {
return false;
}
return !this._isDocumentPath(path);
};
/**
* Construct a StructuredQuery to find descendant documents of a collection.
*
* See:
* https://firebase.google.com/docs/firestore/reference/rest/v1beta1/StructuredQuery
*
* @param {boolean} allDescendants true if subcollections should be included.
* @param {number} batchSize maximum number of documents to target (limit).
* @param {string=} startAfter document name to start after (optional).
* @return {object} a StructuredQuery.
*/
FirestoreDelete.prototype._collectionDescendantsQuery = function(
allDescendants,
batchSize,
startAfter
) {
var nullChar = String.fromCharCode(0);
var startAt = this.parent + "/" + this.path + "/" + nullChar;
var endAt = this.parent + "/" + this.path + nullChar + "/" + nullChar;
var where = {
compositeFilter: {
op: "AND",
filters: [
{
fieldFilter: {
field: {
fieldPath: "__name__",
},
op: "GREATER_THAN_OR_EQUAL",
value: {
referenceValue: startAt,
},
},
},
{
fieldFilter: {
field: {
fieldPath: "__name__",
},
op: "LESS_THAN",
value: {
referenceValue: endAt,
},
},
},
],
},
};
var query = {
structuredQuery: {
where: where,
limit: batchSize,
from: [
{
allDescendants: allDescendants,
},
],
select: {
fields: [{ fieldPath: "__name__" }],
},
orderBy: [{ field: { fieldPath: "__name__" } }],
},
};
if (startAfter) {
query.structuredQuery.startAt = {
values: [{ referenceValue: startAfter }],
before: false,
};
}
return query;
};
/**
* Construct a StructuredQuery to find descendant documents of a document.
* The document itself will not be included
* among the results.
*
* See:
* https://firebase.google.com/docs/firestore/reference/rest/v1beta1/StructuredQuery
*
* @param {boolean} allDescendants true if subcollections should be included.
* @param {number} batchSize maximum number of documents to target (limit).
* @param {string=} startAfter document name to start after (optional).
* @return {object} a StructuredQuery.
*/
FirestoreDelete.prototype._docDescendantsQuery = function(allDescendants, batchSize, startAfter) {
var query = {
structuredQuery: {
limit: batchSize,
from: [
{
allDescendants: allDescendants,
},
],
select: {
fields: [{ fieldPath: "__name__" }],
},
orderBy: [{ field: { fieldPath: "__name__" } }],
},
};
if (startAfter) {
query.structuredQuery.startAt = {
values: [{ referenceValue: startAfter }],
before: false,
};
}
return query;
};
/**
* Query for a batch of 'descendants' of a given path.
*
* For document format see:
* https://firebase.google.com/docs/firestore/reference/rest/v1beta1/Document
*
* @param {boolean} allDescendants true if subcollections should be included.
* @param {number} batchSize the maximum size of the batch.
* @param {string=} startAfter the name of the document to start after (optional).
* @return {Promise<object[]>} a promise for an array of documents.
*/
FirestoreDelete.prototype._getDescendantBatch = function(allDescendants, batchSize, startAfter) {
var url;
var body;
if (this.isDocumentPath) {
url = this.parent + "/" + this.path + ":runQuery";
body = this._docDescendantsQuery(allDescendants, batchSize, startAfter);
} else {
url = this.parent + ":runQuery";
body = this._collectionDescendantsQuery(allDescendants, batchSize, startAfter);
}
return api
.request("POST", "/v1beta1/" + url, {
auth: true,
data: body,
origin: api.firestoreOrigin,
})
.then(function(res) {
// Return the 'document' property for each element in the response,
// where it exists.
return res.body
.filter(function(x) {
return x.document;
})
.map(function(x) {
return x.document;
});
});
};
/**
* Progress bar shared by the class.
*/
FirestoreDelete.progressBar = new ProgressBar("Deleted :current docs (:rate docs/s)", {
total: Number.MAX_SAFE_INTEGER,
});
/**
* Repeatedly query for descendants of a path and delete them in batches
* until no documents remain.
*
* @return {Promise} a promise for the entire operation.
*/
FirestoreDelete.prototype._recursiveBatchDelete = function() {
var self = this;
// Tunable deletion parameters
var readBatchSize = 7500;
var deleteBatchSize = 250;
var maxPendingDeletes = 15;
var maxQueueSize = deleteBatchSize * maxPendingDeletes * 2;
// All temporary variables for the deletion queue.
var queue = [];
var numPendingDeletes = 0;
var pagesRemaining = true;
var pageIncoming = false;
var lastDocName;
var failures = [];
var retried = {};
var queueLoop = function() {
if (queue.length == 0 && numPendingDeletes == 0 && !pagesRemaining) {
return true;
}
if (failures.length > 0) {
logger.debug("Found " + failures.length + " failed deletes, failing.");
return true;
}
if (queue.length <= maxQueueSize && pagesRemaining && !pageIncoming) {
pageIncoming = true;
self
._getDescendantBatch(self.allDescendants, readBatchSize, lastDocName)
.then(function(docs) {
pageIncoming = false;
if (docs.length == 0) {
pagesRemaining = false;
return;
}
queue = queue.concat(docs);
lastDocName = docs[docs.length - 1].name;
})
.catch(function(e) {
logger.debug("Failed to fetch page after " + lastDocName, e);
pageIncoming = false;
});
}
if (numPendingDeletes > maxPendingDeletes) {
return false;
}
if (queue.length == 0) {
return false;
}
var toDelete = [];
var numToDelete = Math.min(deleteBatchSize, queue.length);
for (var i = 0; i < numToDelete; i++) {
toDelete.push(queue.shift());
}
numPendingDeletes++;
firestore
.deleteDocuments(self.project, toDelete)
.then(function(numDeleted) {
FirestoreDelete.progressBar.tick(numDeleted);
numPendingDeletes--;
})
.catch(function(e) {
// For server errors, retry if the document has not yet been retried.
if (e.status >= 500 && e.status < 600) {
logger.debug("Server error deleting doc batch", e);
// Retry each doc up to one time
toDelete.forEach(function(doc) {
if (retried[doc.name]) {
logger.debug("Failed to delete doc " + doc.name + " multiple times.");
failures.push(doc.name);
} else {
retried[doc.name] = true;
queue.push(doc);
}
});
} else {
logger.debug("Fatal error deleting docs ", e);
failures = failures.concat(toDelete);
}
numPendingDeletes--;
});
return false;
};
return new Promise(function(resolve, reject) {
var intervalId = setInterval(function() {
if (queueLoop()) {
clearInterval(intervalId);
if (failures.length == 0) {
resolve();
} else {
reject("Failed to delete documents " + failures);
}
}
}, 0);
});
};
/**
* Delete everything under a given path. If the path represents
* a document the document is deleted and then all descendants
* are deleted.
*
* @return {Promise} a promise for the entire operation.
*/
FirestoreDelete.prototype._deletePath = function() {
var self = this;
var initialDelete;
if (this.isDocumentPath) {
var doc = { name: this.parent + "/" + this.path };
initialDelete = firestore.deleteDocument(doc).catch(function(err) {
logger.debug("deletePath:initialDelete:error", err);
if (self.allDescendants) {
// On a recursive delete, we are insensitive to
// failures of the initial delete
return Promise.resolve();
}
// For a shallow delete, this error is fatal.
return utils.reject("Unable to delete " + clc.cyan(this.path));
});
} else {
initialDelete = Promise.resolve();
}
return initialDelete.then(function() {
return self._recursiveBatchDelete();
});
};
/**
* Delete an entire database by finding and deleting each collection.
*
* @return {Promise} a promise for all of the operations combined.
*/
FirestoreDelete.prototype.deleteDatabase = function() {
var self = this;
return firestore
.listCollectionIds(this.project)
.catch(function(err) {
logger.debug("deleteDatabase:listCollectionIds:error", err);
return utils.reject("Unable to list collection IDs");
})
.then(function(collectionIds) {
var promises = [];
logger.info("Deleting the following collections: " + clc.cyan(collectionIds.join(", ")));
for (var i = 0; i < collectionIds.length; i++) {
var collectionId = collectionIds[i];
var deleteOp = new FirestoreDelete(self.project, collectionId, {
recursive: true,
});
promises.push(deleteOp.execute());
}
return Promise.all(promises);
});
};
/**
* Check if a path has any children. Useful for determining
* if deleting a path will affect more than one document.
*
* @return {Promise<boolean>} a promise that returns true if the path has
* children and false otherwise.
*/
FirestoreDelete.prototype.checkHasChildren = function() {
return this._getDescendantBatch(true, 1).then(function(docs) {
return docs.length > 0;
});
};
/**
* Run the delete operation.
*/
FirestoreDelete.prototype.execute = function() {
var verifyRecurseSafe;
if (this.isDocumentPath && !this.recursive && !this.shallow) {
verifyRecurseSafe = this.checkHasChildren().then(function(multiple) {
if (multiple) {
return utils.reject("Document has children, must specify -r or --shallow.", { exit: 1 });
}
});
} else {
verifyRecurseSafe = Promise.resolve();
}
var self = this;
return verifyRecurseSafe.then(function() {
return self._deletePath();
});
};
module.exports = FirestoreDelete;
For those who can't or don't want to use Cloud Functions, I found a recursiveDelete function in the Admin SDK:
https://googleapis.dev/nodejs/firestore/latest/Firestore.html#recursiveDelete
// Recursively delete a reference and log the references of failures.
const MAX_RETRY_ATTEMPTS = 3; // not defined in the original snippet; any small retry cap works
const bulkWriter = firestore.bulkWriter();
bulkWriter
.onWriteError((error) => {
if (error.failedAttempts < MAX_RETRY_ATTEMPTS) {
return true;
} else {
console.log('Failed write at document: ', error.documentRef.path);
return false;
}
});
await firestore.recursiveDelete(docRef, bulkWriter);
I don't know how helpful this will be for you, but test it and compare the execution time; I took it from the Firestore docs.
/** Delete a collection in batches to avoid out-of-memory errors.
* Batch size may be tuned based on document size (at most 1 MB) and application requirements.
*/
void deleteCollection(CollectionReference collection, int batchSize) {
try {
// retrieve a small batch of documents to avoid out-of-memory errors
ApiFuture<QuerySnapshot> future = collection.limit(batchSize).get();
int deleted = 0;
// future.get() blocks on document retrieval
List<QueryDocumentSnapshot> documents = future.get().getDocuments();
for (QueryDocumentSnapshot document : documents) {
document.getReference().delete();
++deleted;
}
if (deleted >= batchSize) {
// retrieve and delete another batch
deleteCollection(collection, batchSize);
}
} catch (Exception e) {
System.err.println("Error deleting collection : " + e.getMessage());
}
}
As mentioned above, you need to write good bit of code for this. For each document that is to be deleted you need to check if it has one or more collections. If it does, then you need to queue those up for deletion too. I wrote the code below to do this. It's not tested to be scalable to large data sets, which is fine for me as I'm using it to clean up after small scale integration tests. If you need something more scalable, feel free to take this as a starting point and play around with batching more.
class FirebaseDeleter {
constructor(database, collections) {
this._database = database;
this._pendingCollections = [];
}
run() {
return new Promise((resolve, reject) => {
this._callback = resolve;
this._database.getCollections().then(collections => {
this._pendingCollections = collections;
this._processNext();
});
});
}
_processNext() {
const collections = this._pendingCollections;
this._pendingCollections = [];
const promises = collections.map(collection => {
return this.deleteCollection(collection, 10000);
});
Promise.all(promises).then(() => {
if (this._pendingCollections.length == 0) {
this._callback();
} else {
process.nextTick(() => {
this._processNext();
});
}
});
}
deleteCollection(collectionRef, batchSize) {
var query = collectionRef;
return new Promise((resolve, reject) => {
this.deleteQueryBatch(query, batchSize, resolve, reject);
});
}
deleteQueryBatch(query, batchSize, resolve, reject) {
query
.get()
.then(snapshot => {
// When there are no documents left, we are done
if (snapshot.size == 0) {
return 0;
}
// Delete documents in a batch
var batch = this._database.batch();
const collectionPromises = [];
snapshot.docs.forEach(doc => {
collectionPromises.push(
doc.ref.getCollections().then(collections => {
collections.forEach(collection => {
this._pendingCollections.push(collection);
});
})
);
batch.delete(doc.ref);
});
// Wait until we know if all the documents have collections before deleting them.
return Promise.all(collectionPromises).then(() => {
return batch.commit().then(() => {
return snapshot.size;
});
});
})
.then(numDeleted => {
if (numDeleted === 0) {
resolve();
return;
}
// Recurse on the next process tick, to avoid
// exploding the stack.
process.nextTick(() => {
this.deleteQueryBatch(query, batchSize, resolve, reject);
});
})
.catch(reject);
}
}
Solution using Node.js Admin SDK
export const deleteDocument = async (doc: FirebaseFirestore.DocumentReference) => {
const collections = await doc.listCollections()
await Promise.all(collections.map(collection => deleteCollection(collection)))
await doc.delete()
}
export const deleteCollection = async (collection: FirebaseFirestore.CollectionReference) => {
const query = collection.limit(100)
while (true) {
const snap = await query.get()
if (snap.empty) {
return
}
await Promise.all(snap.docs.map(doc => deleteDocument(doc.ref)))
}
}
There is now a simple way to delete a document and all of its subcollections using Node.js.
This was made available in nodejs-firestore version v4.11.0.
From the docs:
recursiveDelete()
Recursively deletes all documents and subcollections at and under the specified level.
import * as admin from 'firebase-admin'
const ref = admin.firestore().doc('my_document')
admin.firestore().recursiveDelete(ref)
You can write a handler which will recursive delete all nested descendants when triggers onDelete Firestore event.
Example of handler:
const deleteDocumentWithDescendants = async (documentSnap: FirebaseFirestore.QueryDocumentSnapshot) => {
// Await every nested delete; otherwise the function may return
// (and the runtime may be torn down) before the work is finished
const subCollections = await documentSnap.ref.listCollections();
for (const subCollection of subCollections) {
const snap = await subCollection.get();
for (const doc of snap.docs) {
await doc.ref.delete();
await deleteDocumentWithDescendants(doc);
}
}
};
// On any document delete
export const onDocumentDelete = async (documentSnap: FirebaseFirestore.QueryDocumentSnapshot) => {
await deleteDocumentWithDescendants(documentSnap);
};
Tie it up with firestore event:
exports.onDeleteDocument = functions.firestore.document('{collectionId}/{docId}')
.onDelete(onDocumentDelete);
// You can add the whole collection hierarchy to an object
private collectionsHierarchy = {
groups: [
[
'groups',
'projects',
'files',
'assets',
'urls',
'members'
]
]
};
async deleteDocument(rootDocument: string) {
// if (!rootDocument.startsWith(`groups/${this.groupId()}`)) {
// rootDocument = `groups/${this.groupId()}/${rootDocument}`;
// }
const batchSize: number = 100;
let root = await this.db
.doc(rootDocument)
.get()
.toPromise();
if (!root.exists) {
return;
}
const segments = rootDocument.split('/');
const documentCollection = segments[segments.length - 2];
const allHierarchies = this.collectionsHierarchy[documentCollection];
for (let i = 0; i < allHierarchies.length; i = i + 1) {
const hierarchy = allHierarchies[i];
const collectionIndex = hierarchy.indexOf(documentCollection) + 1;
const nextCollections: string[] = hierarchy.slice(collectionIndex);
const stack = [`${root.ref.path}/${nextCollections.shift()}`];
while (stack.length) {
const path = stack.pop();
const collectionRef = this.db.firestore.collection(path);
const query = collectionRef.orderBy('__name__').limit(batchSize);
let deletedItems = await this.deleteQueryBatch(query, batchSize);
const nextCollection = nextCollections.shift();
deletedItems = deletedItems.map(di => `${di}/${nextCollection}`);
stack.push(...deletedItems);
}
}
await root.ref.delete();
}
private async deleteQueryBatch(
query: firebase.firestore.Query,
batchSize: number
) {
let deletedItems: string[] = [];
let snapshot = await query.get();
if (snapshot.size === 0) {
return deletedItems;
}
const batch = this.db.firestore.batch();
snapshot.docs.forEach(doc => {
deletedItems.push(doc.ref.path);
batch.delete(doc.ref);
});
await batch.commit();
// Stop recursing when the last page was smaller than the batch size
if (snapshot.size < batchSize) {
return deletedItems;
}
const result = await this.deleteQueryBatch(query, batchSize);
return [...deletedItems, ...result];
}
Another solution using Node.js Admin SDK with Batch.
const traverseDocumentRecursively = async (
docRef: FirebaseFirestore.DocumentReference<FirebaseFirestore.DocumentData>,
accumulatedRefs: FirebaseFirestore.DocumentReference<FirebaseFirestore.DocumentData>[],
) => {
const collections = await docRef.listCollections();
if (collections.length > 0) {
for (const collection of collections) {
const snapshot = await collection.get();
for (const doc of snapshot.docs) {
accumulatedRefs.push(doc.ref);
await traverseDocumentRecursively(doc.ref, accumulatedRefs);
}
}
}
};
import { chunk } from 'lodash';
const doc = admin.firestore().collection('users').doc('001');
const accumulatedRefs: FirebaseFirestore.DocumentReference<FirebaseFirestore.DocumentData>[] = [];
await traverseDocumentRecursively(doc, accumulatedRefs);
await Promise.all(
// Each transaction or batch of writes can write to a maximum of 500 documents
chunk(accumulatedRefs, 500).map((chunkedRefs) => {
const batch = admin.firestore().batch();
for (const ref of chunkedRefs) {
batch.delete(ref);
}
return batch.commit();
}),
);
Not sure if this is helpful for anyone here, but I frequently face the error "Fatal error deleting docs <list of docs>" when using the firebase-tools.firestore.delete method (firebase-tools version 9.22.0).
I am currently handling these deletion failures using the returned error message, in order to avoid rewriting the code cited in Oleg Bondarenko's answer. It uses admin.firestore to effectively delete the failed docs.
It's a poor solution since it relies on the error message, but at least it doesn't force us to copy the whole FirestoreDelete code to modify just a few lines of it:
firebase_tools.firestore
.delete(path, {
project: JSON.parse(process.env.FIREBASE_CONFIG!).projectId,
recursive: true,
yes: true,
token: getToken(),
})
.catch((err: Error) => {
if (err.name == "FirebaseError") {
// If recursive delete fails to delete some of the documents,
// parse the failures from the error message and delete it manually
const failedDeletingDocs = err.message.match(
/.*Fatal error deleting docs ([^\.]+)/
);
if (failedDeletingDocs) {
const docs = failedDeletingDocs[1].split(", ");
const docRefs = docs.map((doc) =>
firestore.doc(doc.slice(doc.search(/\(default\)\/documents/) + 19))
);
firestore
.runTransaction(async (t) => {
docRefs.forEach((doc) => t.delete(doc));
return docs;
})
.then((docs) =>
console.log(
"Succesfully deleted docs after failing: " + docs.join(", ")
)
)
.catch((err) => console.error(err));
}
}
});
If you are looking to delete user data, a solution to consider in 2022 is the Delete User Data Firebase Extension.
Once this is active, you can simply delete the user from Firebase Auth to trigger the recursive deletion of the user documents:
import admin from "firebase-admin";
admin.auth().deleteUser(userId);
You can call firebase.firestore().doc("whatever").set() and that will delete everything in that document.
The only way .set does not erase everything is if you set the merge flag to true.
See Firestore Documentation on Add Data
var cityRef = db.collection('cities').doc('BJ');
var setWithMerge = cityRef.set({
capital: true
}, { merge: true });
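For contrast, a minimal sketch of the same call without the merge flag - this replaces every field of the document (note that subcollections under the document are untouched either way, which matters for the original question):
var cityRefNoMerge = db.collection('cities').doc('BJ');
// Without { merge: true }, any field not listed here is removed from the doc.
// Subcollections under 'BJ' survive in both cases.
cityRefNoMerge.set({ capital: true });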

Stop scanning after few seconds, BLE Scanning using react-native-ble-plx

I'm currently using Polidea's react-native-ble-plx library to do BLE scanning.
I do not want it to continue scanning; I just want to capture the devices scanned within a specified time limit.
Is there a way to do this?
Code:
export const scan = function scan() {
const subscription = DeviceManager.onStateChange((state) => {
if (state === 'PoweredOn') {
DeviceManager.startDeviceScan(null, null, (error, device) => {
if (error) {
console.log('error', error);
}
if (device !== null) {
console.log('device found ----> [id,name]', device.id, device.name);
}
});
subscription.remove();
}
}, true);
};
Output: (screenshot of the scanned devices omitted)
I would do this simply by creating a timer variable outside the scope of this function, with each iteration of the scan callback handler checking how much time has passed and stopping the scanning if it is over a certain amount.
let startTime;
export const scan = function scan() {
startTime = new Date(); // reset the timer each time a scan starts
const subscription = DeviceManager.onStateChange((state) => {
if (state === 'PoweredOn') {
DeviceManager.startDeviceScan(null, null, (error, device) => {
const endTime = new Date();
var timeDiff = endTime - startTime; // in ms
// strip the ms
timeDiff /= 1000;
// get seconds
var seconds = Math.round(timeDiff);
if (error) {
console.log('error', error);
}
if (device !== null) {
console.log('device found ----> [id,name]', device.id, device.name);
}
if (seconds > 5) {
DeviceManager.stopDeviceScan(); //stop scanning if more than 5 secs passed
}
});
subscription.remove();
}
}, true);
};
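An alternative sketch (not from the original answer): schedule the stop up front with a timer instead of checking the elapsed time on every scan callback; stopDeviceScan() is the same library call used above:
export const scan = function scan() {
  const subscription = DeviceManager.onStateChange((state) => {
    if (state === 'PoweredOn') {
      DeviceManager.startDeviceScan(null, null, (error, device) => {
        if (error) {
          console.log('error', error);
        }
        if (device !== null) {
          console.log('device found ----> [id,name]', device.id, device.name);
        }
      });
      // Stop scanning unconditionally after 5 seconds
      setTimeout(() => DeviceManager.stopDeviceScan(), 5000);
      subscription.remove();
    }
  }, true);
};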

Modifying function onDndUploadDrop in _AlfDndDocumentUploadMixin.js file

I have Alfresco Share/Repo version 5.2.3.
I'm trying to modify the function found in _AlfDndDocumentUploadMixin.js with the fix that has been posted on https://github.com/Alfresco/Aikau/blob/1.0.101_hotfixes/aikau/src/main/resources/alfresco/documentlibrary/_AlfDndDocumentUploadMixin.js
I tried extending attachment-doclib.xml and getting the JSON model, but it gives me a null pointer.
My code is the following
From attachment-doclib.get.js
var documentServices = model.jsonModel;
for (var i=0; i<documentServices.length; i++)
{
if (documentServices[i] === "alfresco/documentlibrary")
{
documentServices[i] = "js/aikau/1.0.101.10/alfresco/documentlibrary/my-documentlibrary/_AlfDndDocumentUploadMixin-extension";
}
else if (documentServices[i].name === "alfresco/documentlibrary")
{
documentServices[i].name = "js/aikau/1.0.101.10/alfresco/documentlibrary/my-documentlibrary/_AlfDndDocumentUploadMixin-extension";
}
}
from doclib-customizations.xml
<config evaluator="string-compare" condition="WebFramework" replace="false">
<web-framework>
<dojo-pages>
<packages>
<package name="documentlibrary" location="js/aikau/1.0.101.10/alfresco/documentlibrary" />
</packages>
</dojo-pages>
</web-framework>
</config>
From _alfDndDocumentUploadMixin-extended.js
onDndUploadDrop: function alfresco_documentlibrary__AlfDndDocumentUploadMixin__onDndUploadDrop(evt) {
try
{
// Only perform a file upload if the user has *actually* dropped some files!
this.alfLog("log", "Upload drop detected", evt);
if (evt.dataTransfer.files !== undefined && evt.dataTransfer.files !== null && evt.dataTransfer.files.length > 0)
{
this.removeDndHighlight();
var destination = this._currentNode ? this._currentNode.nodeRef : null;
var config = this.getUploadConfig();
var defaultConfig = {
destination: destination,
siteId: null,
containerId: null,
uploadDirectory: null,
updateNodeRef: null,
description: "",
overwrite: false,
thumbnails: "doclib",
username: null
};
var updatedConfig = lang.mixin(defaultConfig, config);
var walkFileSystem = lang.hitch(this, function alfresco_documentlibrary__AlfDndDocumentUploadMixin__onDndUploadDrop__walkFileSystem(directory, callback, error) {
callback.limit = this.dndMaxFileLimit;
callback.pending = callback.pending || 0;
callback.files = callback.files || [];
// get a dir reader and cleanup file path
var reader = directory.createReader(),
relativePath = directory.fullPath.replace(/^\//, "");
var repeatReader = function alfresco_documentlibrary__AlfDndDocumentUploadMixin__onDndUploadDrop__walkFileSystem__repeatReader() {
// about to start an async callback function
callback.pending++;
reader.readEntries(function alfresco_documentlibrary__AlfDndDocumentUploadMixin__onDndUploadDrop__walkFileSystem__repeatReader__readEntries(entries) {
// processing an async callback function
callback.pending--;
array.forEach(entries, function(entry) {
if (entry.isFile)
{
// about to start an async callback function
callback.pending++;
entry.file(function(File) {
// add the relativePath property to each file - this can be used to rebuild the contents of
// a nested tree folder structure if an appropriate API is available to do so
File.relativePath = relativePath;
callback.files.push(File);
if (callback.limit && callback.files.length > callback.limit)
{
throw new Error("Maximum dnd file limit reached: " + callback.limit);
}
// processing an async callback function
if (--callback.pending === 0)
{
// fall out here if last item processed is a file entry
callback(callback.files);
}
}, error);
}
else
{
walkFileSystem(entry, callback, error);
}
});
// the reader API is a little esoteric,from the MDN docs:
// "Continue calling readEntries() until an empty array is returned.
// You have to do this because the API might not return all entries in a single call."
if (entries.length !== 0)
{
repeatReader();
}
// fall out here if last item processed is a dir entry e.g. empty dir
if (callback.pending === 0)
{
callback(callback.files);
}
}, error);
};
repeatReader();
});
var addSelectedFiles = lang.hitch(this, function alfresco_documentlibrary__AlfDndDocumentUploadMixin__onDndUploadDrop__addSelectedFiles(files) {
if (this.dndMaxFileLimit && files.length > this.dndMaxFileLimit)
{
throw new Error("Maximum dnd file limit reached: " + this.dndMaxFileLimit);
}
// Check to see whether or not the generated upload configuration indicates
// that an existing node will be created or not. If node is being updated then
// we need to generate an intermediary step to capture version and comments...
if (updatedConfig.overwrite === false)
{
// Set up a response topic for receiving notifications that the upload has completed...
var responseTopic = this.generateUuid();
this._uploadSubHandle = this.alfSubscribe(responseTopic, lang.hitch(this, this.onFileUploadComplete), true);
this.alfPublish(topics.UPLOAD_REQUEST, {
alfResponseTopic: responseTopic,
files: files,
targetData: updatedConfig
}, true);
}
else
{
// TODO: Check that only one file has been dropped and issue error...
this.publishUpdateRequest(updatedConfig, files);
}
});
var items = evt.dataTransfer.items || [], firstEntry;
// webkitGetAsEntry is a marker for determining FileSystem API support.
// SHA-2164 - Firefox claims support, but different impl. rather than code around differences, fallback.
if (items[0] && items[0].webkitGetAsEntry && !has("ff") && (firstEntry = items[0].webkitGetAsEntry()))
{
walkFileSystem(firstEntry.filesystem.root, function(files) {
addSelectedFiles(files);
}, function() {
// fallback to standard way if error happens
addSelectedFiles(evt.dataTransfer.files);
}
);
}
else
{
// fallback to standard way if no support for filesystem API
addSelectedFiles(evt.dataTransfer.files);
}
}
else
{
this.alfLog("error", "A drop event was detected, but no files were present for upload: ", evt.dataTransfer);
}
}
catch(exception)
{
this.alfLog("error", "The following error occurred when files were dropped onto the Document List: ", exception);
}
// Remove the drag highlight...
this.removeDndHighlight();
// Destroy the overlay node (required for views that will re-render all the contents)...
domConstruct.destroy(this.dragAndDropOverlayNode);
this.dragAndDropOverlayNode = null;
evt.stopPropagation();
evt.preventDefault();
},
/**
* This function publishes an update version request. It will request that a new dialog
* be displayed containing the form controls defined in
* [widgetsForUpdate]{@link module:alfresco/documentlibrary/_AlfDndDocumentUploadMixin#widgetsForUpdate}.
*
* @instance
* @param {object} uploadConfig
*
* @fires ALF_CREATE_FORM_DIALOG_REQUEST
*/
publishUpdateRequest: function alfresco_documentlibrary__AlfDndDocumentUploadMixin__publishUpdateRequest(uploadConfig, files) {
// TODO: Work out the next minor and major increment versions...
// TODO: Localization required...
// Set up a response topic for receiving notifications that the upload has completed...
var responseTopic = this.generateUuid();
this._uploadSubHandle = this.alfSubscribe(responseTopic, lang.hitch(this, this.onFileUploadComplete), true);
// To avoid the issue with processing payloads containing files with native
// code in them, it is necessary to temporarily store the files in the data model...
var filesRef = this.generateUuid();
this.alfSetData(filesRef, files);
this.alfPublish("ALF_CREATE_FORM_DIALOG_REQUEST", {
dialogTitle: "Update",
dialogConfirmationButtonTitle: "Continue Update",
dialogCancellationButtonTitle: "Cancel",
formSubmissionTopic: topics.UPLOAD_REQUEST,
formSubmissionPayloadMixin: {
alfResponseTopic: responseTopic,
filesRefs: filesRef,
targetData: uploadConfig
},
fixedWidth: true,
widgets: lang.clone(this.widgetsForUpdate)
}, true);
},
Expected: all files that are dragged and dropped are uploaded.
Actual: only one file is uploaded.

Apply CSS Filters to cropped image and save/upload

I can already input the image and crop it. I tried to apply CSS filters to it, but it seems the CSS filters only apply to the img tag, not the actual image data.
I am using both @Alyle cropping and ngx-image-cropper (for tests). Both give me a base64 string for the cropped image. I am able to load the cropped image into the img tag and also upload it to the database.
onCropped(e: ImgCropperEvent) {
this.croppedImage = e.dataURL;
// console.log('cropped img: ', e.dataURL);
}
onloaded(e: ImgCropperEvent) {
this.imagemOriginal = e.originalDataURL;
this.cropper.center();
console.log('img loaded', e.name);
}
onerror(e: ImgCropperErrorEvent) {
console.warn(`'${e.name}' is not a valid image`, e);
}
// Apply filters /////////////////////////////////////////////////
change(crop: Crop): void {
this.stylus = crop.nome;
this.crops.forEach(function (value) {
(value.nome === crop.nome) ? value.ehSelec = true : value.ehSelec = false;
});
// const canvas = document.getElementById('cropping'), image = document.createElement('img');
// image.src = canvas.toDataURL('image/jpeg', 1.0);
// document.body.appendChild(image);
}
enviarParanue(): void {
const ref = firebase.storage().ref(`imagens/usuarios/idTeste`).child(`nomeTeste`);
const stringa = this.removerString(this.croppedImage);
ref.put(this.base64toBlob(stringa, 'image/png')).then((snapshot) => {
// console.log('snapshot', snapshot.valueOf());
ref.getDownloadURL().then(function(downloadURL) {
console.log('File available at', downloadURL);
});
});
// ref.putString(stringa, 'base64', {contentType: 'image/png'}).then((snapshot) => {
// // console.log('snapshot', snapshot.valueOf());
// ref.getDownloadURL().then(function(downloadURL) {
// console.log('File available at', downloadURL);
// });
// });
}
removerString(stringa: string): string {
return stringa.substring(23);
}
base64toBlob(base64Data: any, contentType: any) {
contentType = contentType || '';
const sliceSize = 1024;
const byteCharacters = atob(base64Data);
const bytesLength = byteCharacters.length;
const slicesCount = Math.ceil(bytesLength / sliceSize);
const byteArrays = new Array(slicesCount);
for (let sliceIndex = 0; sliceIndex < slicesCount; ++ sliceIndex) {
const begin = sliceIndex * sliceSize;
const end = Math.min(begin + sliceSize, bytesLength);
const bytes = new Array(end - begin);
for (let offset = begin, i = 0 ; offset < end; ++i, ++offset) {
bytes[i] = byteCharacters[offset].charCodeAt(0);
}
byteArrays[sliceIndex] = new Uint8Array(bytes);
}
return new Blob(byteArrays, { type: contentType });
}
EXAMPLE OF THE CSS FILTERS:
.none {filter:none;}
.blur {filter:blur(2.5px);}
.brightness {filter:brightness(200%);}
.contrast {filter:contrast(200%);}
.drop-shadow {filter:drop-shadow(8px 8px 10px gray);}
.grayscale {filter:grayscale(100%);}
.hue-rotate {filter:hue-rotate(90deg);}
.invert {filter:invert(100%);}
.opacity {filter:opacity(30%);}
.saturate {filter:saturate(8);}
.sepia {filter:sepia(100%);}
.contrast-brightness {filter:contrast(200%) brightness(150%);}
The problem is that I don't know how to apply the CSS filters to the image itself, so that the cropped version is uploaded with the effects (sepia, contrast, etc.) baked in.
I tried to get the img src and convert it to a Blob, but that didn't work.
I ended up saving a string with the name of the filter in the database, and applying the filter when the image is loaded. An upside is that I can change the filter whenever I want.
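For reference (not what I ended up doing), the filter can also be baked into the uploaded bitmap by drawing the cropped image onto a canvas whose 2D context has its filter property set, then exporting the canvas; a minimal sketch (note that ctx.filter is not supported by every browser, e.g. older Safari):
function applyFilterToDataUrl(dataUrl, cssFilter) {
  return new Promise((resolve, reject) => {
    const image = new Image();
    image.onload = () => {
      const canvas = document.createElement('canvas');
      canvas.width = image.width;
      canvas.height = image.height;
      const ctx = canvas.getContext('2d');
      ctx.filter = cssFilter; // same syntax as the CSS filter property, e.g. 'sepia(100%)'
      ctx.drawImage(image, 0, 0);
      resolve(canvas.toDataURL('image/png')); // can then go through base64toBlob() above
    };
    image.onerror = reject;
    image.src = dataUrl;
  });
}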
