I have a Firebase child node with about 15,000,000 child objects with a total size of about 8 GB of data.
Example data structure:
firebase.com/childNode/$pushKey
each $pushKey contains a small flat dictionary:
{a: 1.0, b: 2.0, c: 3.0}
I would like to delete this data as efficiently and easily as possible. How?
What I tried:
My first try was a put request:
PUT firebase.com/childNode.json?auth=FIRE_SECRET
data-raw: null
response: {
"error": "Data requested exceeds the maximum size that can be accessed with a single request. Contact support#firebase.com for help."
}
So that didn't work, let's do a limit request:
PUT firebase.com/childNode.json?auth=FIRE_SECRET&orderBy="$key"&limitToFirst=100
data-raw: null
response: {
"error": "Querying related parameters not supported on this request type"
}
No luck so far :( What about writing a script that will get the first X number of keys and then create a patch request with each value set to null?
GET firebase.com/childNode.json?auth=FIRE_SECRET&shallow=true&orderBy="$key"&limitToLast=100
{
"error" : "Mixing 'shallow' and querying parameters is not supported"
}
This one really isn't going to be easy, is it? I could drop the shallow requirement, get the keys, and finish the script. I was just hoping there would be an easier/more efficient way.
Another thing I tried was a Node script that listens for child_added and then directly tries to remove those children:
ref.authWithCustomToken(AUTH_TOKEN, function(error, authData) {
  if (error) { console.log("Login Failed!", error) }
  if (!error) { console.log("Login Succeeded!", authData) }

  ref.child("childNode").on("child_added", function(snap) {
    console.log(`found: ${snap.key()}`)
    ref.child("childNode").child(snap.key()).remove(function(err) {
      if (!err) { console.log(`deleted: ${snap.key()}`) }
    })
  })
})
This script actually hangs right now, but earlier I did receive something like a maximum call stack size warning from Firebase. I know this is not a Firebase problem, but I don't see any particularly easy way to solve it.
Downloading a shallow tree will download only the keys. So instead of asking the server to order and limit, you can download all keys.
Then you can order and limit them client-side, and send delete requests to Firebase in batches.
You can use this script for inspiration: https://gist.github.com/wilhuff/b78e7391396e09f6c614
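If you would rather roll your own, here is a minimal, untested Node sketch of that approach (it assumes Node 18+ for the global fetch, and placeholder values for the database URL and secret). It downloads the shallow key list once, then deletes the children in batches of PATCH requests that set each key to null.
const DB_URL = 'https://your-db.firebaseio.com'; // placeholder
const SECRET = 'FIRE_SECRET';                    // placeholder
const BATCH_SIZE = 100;

async function deleteAllChildren() {
  // shallow=true returns only the keys, not the values
  const res = await fetch(`${DB_URL}/childNode.json?auth=${SECRET}&shallow=true`);
  const keys = Object.keys((await res.json()) || {});

  for (let i = 0; i < keys.length; i += BATCH_SIZE) {
    const batch = keys.slice(i, i + BATCH_SIZE);
    // a PATCH with null values deletes exactly these children
    const updates = Object.fromEntries(batch.map((key) => [key, null]));
    await fetch(`${DB_URL}/childNode.json?auth=${SECRET}`, {
      method: 'PATCH',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(updates),
    });
    console.log(`deleted ${Math.min(i + BATCH_SIZE, keys.length)} of ${keys.length} keys`);
  }
}

deleteAllChildren().catch(console.error);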
Use the Firebase CLI tool for this: firebase database:remove --project .
In the browser console, this is the fastest way:
database.ref('data').limitToFirst(10000).once('value', snap => {
  var updates = {};
  snap.forEach(snap => {
    updates[snap.key] = null;
  });
  database.ref('data').update(updates);
});
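If one batch of 10,000 is not enough, a hedged variation is to wrap the same idea in a loop and keep deleting until the snapshot comes back empty (untested sketch, same database reference as above):
async function wipe() {
  while (true) {
    const snap = await database.ref('data').limitToFirst(10000).once('value');
    if (!snap.exists()) break; // nothing left to delete
    const updates = {};
    snap.forEach(child => { updates[child.key] = null; });
    await database.ref('data').update(updates);
    console.log(`deleted ${Object.keys(updates).length} children`);
  }
}
wipe();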
My question is: how can I delete a user's analytics data from Firebase using the "Google User Deletion API" and its method userDeletionRequests:upsert? This is important for me to fully fulfill GDPR.
I tried searching for this, but didn't find a solution for using it in combination with NodeJS and firebase-cloud-functions.
My biggest problem is how to get the access token; this is what I have for now:
const accessToken = (await admin.credential.applicationDefault().getAccessToken()).access_token;

return ky.post(constants.googleUserDeletionURL, {
  body: JSON.stringify({
    kind: "analytics#userDeletionRequest",
    id: {
      type: "USER_ID",
      userId: uid,
    },
    propertyId: constants.googleUserDeletionPropertyId,
  }),
  headers: {
    "Authorization": `Bearer ${accessToken}`,
  },
}).catch((err) => {
  functions.logger.log(`An Error happened trying to delete user-analytics ${(err as Error).message}`);
});
But I always get: An Error happened trying to delete user-analytics Request failed with status code 403 Forbidden
Okay, after some painful and long days (it literally took me >20 hours), I've figured out how to achieve this request. Here is a step-by-step guide:
Step 1 - Needed Dependencies
To send a POST request to Google, we need an HTTP client library. I've chosen "ky", so we need to install it first with:
npm install ky
Furthermore, we need to create our OWN OAuth2 token, otherwise the POST request will be denied with "error 403". To create our own OAuth token, we need another dependency:
npm install @googleapis/docs
Step 2 - Needed Google Property ID
With your request, Google needs to know which property-id / project you are targeting (where it should later delete the user-analytics data). To get this property-id, we need to log in to GCP via Firebase (not the "real" GCP, the one via Firebase).
For this, go into your "console.firebase.google.com" → Select your project → Analytics Dashboard → "View more in Google Analytics (button at the right corner)"
Write "property-id" into the search field and then save it somewhere (we need it later)
Step 3 - Creating Client-Secret
The third step is to create a service-account, which we will later add into our functions-folder in order to create our oAuthClient (don't worry, you will see what I mean at a later point).
To create your new service.json, log in to Google Cloud Platform via "https://cloud.google.com/" and then follow the pictures:
(Screenshots FIRST through FIFTH omitted; they walk through creating the service account in the GCP console.)
Step 4 - Download JSON
After we created our "oAuth-Deleter service-account", we need to manually download the needed JSON, so we can paste it into our functions-folder.
For this, select "oauth-deleter@your-domain.iam.gserviceaccount.com" under "Service Account".
Then click on "Keys" and "Add key", which will automagically download you a service-json (SELECT Key type → JSON → Create).
Step 5 - Paste JSON file into your functions-folder
To loosen up the mood a bit, here is an easy step. Paste the downloaded JSON-File into your functions-folder.
Step 6 - Grant Access to our newly created oAuth-Deleter account
Creating the service-account and giving it access in the normal GCP is not enough for Google, so we also have to give it access in our Firebase project. For this, go back into your "GCP via Firebase (see Step 2)" → Click on Setting → "User Access for Account" → Click on the "plus"
Then click on "Add user" and write the email we copied before into the email field (the email from Step 3, Picture FOURTH, "Service-Account ID"). In our case, it is "oAuth-Deleter@your-domain.iam.gserviceaccount.com". Furthermore, it needs admin access:
Step 7 - The code
Now, after these million unnecessary but necessary steps, we get to the final part. THE DAMN CODE. I've written this in TypeScript with "compilerOptions" → "module": "esnext", "target": "esnext". But I am sure that you are smart enough to change the code after completing this many steps :)
import admin from "firebase-admin";
import functions from "firebase-functions";
import ky from "ky";
import docs from "@googleapis/docs";
import { UserRecord } from "firebase-functions/v1/auth";

export const dbUserOnDeleted = functions
  .auth
  .user()
  .onDelete((user) => doOnDeletedUser(user));

// ----------------------------

export async function doOnDeletedUser(user: UserRecord) {
  try {
    const googleDeletionURL = "https://www.googleapis.com/analytics/v3/userDeletion/userDeletionRequests:upsert"

    // Step 1: Paste propertyID (see Step 2)
    const copiedPropertyID = "12345678"

    // Step 2: Create oAuthClient
    const oAuthClient = new docs.auth.GoogleAuth({
      keyFile: "NAME-OF-THE-FILE-YOU-COPIED-INTO-YOUR-FUNCTIONS-FOLDER",
      scopes: ["https://www.googleapis.com/auth/analytics.user.deletion"]
    });

    // Step 3: Get the user uid you want to delete from user-analytics
    const uid = user.uid

    // Step 4: Generate access token
    // (this is the reason why we needed the 5 steps before this)
    // yup, a single line of code
    const accessToken = await oAuthClient.getAccessToken() as string;

    // Step 5: Make the request to Google and delete the user
    return ky.post(googleDeletionURL, {
      body: JSON.stringify({
        kind: "analytics#userDeletionRequest",
        id: {
          type: "USER_ID",
          userId: uid
        },
        propertyId: copiedPropertyID
      }),
      headers: {
        "Authorization": "Bearer " + accessToken,
      }
    });
  } catch (err) {
    functions.logger.error(`Something bad happened, ${(err as Error).message}`);
  }
}
Afterthoughts
This was and probably will remain my longest post on Stack Overflow. I have to say that it was a pain in the a** to get this thing working. The amount of work and setup needed to simply delete data from one endpoint is just ridiculous. Google, please fix.
I am trying to use RTK Query mutations to upload a file to the API. Here is my mutation code:
addBanner: builder.mutation({
  query(body) {
    return {
      url: `/api/banners`,
      method: 'POST',
      body,
    }
  },
})
Here is how I generate the data for the request.
const [addBanner, { isBannerLoading }] = useAddBannerMutation();
const new_banner = new FormData();
new_banner.append("file", my_file);
new_banner.append("type", my_type);
new_banner.append("title", my_title);
addBanner(new_banner).unwrap().then( () => ...
But I get an error:
A non-serializable value was detected in the state, in the path: `api.mutations.L-Mje7bYDfyNCC4NcxFD3.originalArgs.file`...
I know I can disable the non-serializable check entirely through the middleware, but I don't think that is an appropriate way of using Redux Toolkit and RTK Query. Without a file, everything works fine. Is there any right way of uploading files with RTK?
Edit: This has been fixed with @reduxjs/toolkit 1.6.1 - please update your package
I just opened an issue for this: https://github.com/reduxjs/redux-toolkit/issues/1239 - thanks for bringing it up!
For now, you'll probably have to disable that check (you can do so for a certain path in the state while keeping it for the rest with the ignoredPaths option).
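For older Toolkit versions, a hedged sketch of that store setup could look like the following. It assumes the default 'api' reducerPath from createApi and a hypothetical './services/api' module; the option names come from getDefaultMiddleware's serializableCheck, and newer Toolkit releases also accept RegExps in ignoredPaths, which helps because the mutation cache key is random.
import { configureStore } from '@reduxjs/toolkit';
import { api } from './services/api'; // hypothetical path to your createApi slice

export const store = configureStore({
  reducer: { [api.reducerPath]: api.reducer },
  middleware: (getDefaultMiddleware) =>
    getDefaultMiddleware({
      serializableCheck: {
        // the dispatched action carries the FormData/File in meta.arg
        ignoredActionPaths: ['meta.arg.originalArgs'],
        // the mutation cache keeps originalArgs in the state as well
        ignoredPaths: [/^api\.mutations\..+\.originalArgs/],
      },
    }).concat(api.middleware),
});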
I am writing a Node.js skill using ask-sdk and using alexa-skill-local to test the endpoint. I need to persist data to DynamoDB in one of the handlers, but I keep getting a "missing region" error. Please find my code below:
'use strict';
// use 'ask-sdk' if standard SDK module is installed
const Alexa = require('ask-sdk');
const { launchRequestHandler, HelpIntentHandler, CancelAndStopIntentHandler, SessionEndedRequestHandler } = require('./commonHandlers');
const ErrorHandler = {
  canHandle() {
    return true;
  },
  handle(handlerInput, error) {
    return handlerInput.responseBuilder
      .speak('Sorry, I can\'t understand the command. Please say again.')
      .reprompt('Sorry, I can\'t understand the command. Please say again.')
      .getResponse();
  },
};
////////////////////////////////
// Code for the handlers here //
////////////////////////////////
exports.handler = Alexa.SkillBuilders
  .standard()
  .addRequestHandlers(
    launchRequestHandler,
    HelpIntentHandler,
    CancelAndStopIntentHandler,
    SessionEndedRequestHandler,
    ErrorHandler
  )
  .withTableName('devtable')
  .withDynamoDbClient()
  .lambda();
And in one of the handlers I am trying to get the persisted attributes like below:
handlerInput.attributesManager.getPersistentAttributes().then((data) => {
console.log('--- the attributes are ----', data)
})
But I keep getting the following error:
(node:12528) UnhandledPromiseRejectionWarning: AskSdk.DynamoDbPersistenceAdapter Error: Could not read item (amzn1.ask.account.AHJECJ7DTOPSTT25R36BZKKET4TKTCGZ7HJWEJEBWTX6YYTLG5SJVLZH5QH257NFKHXLIG7KREDKWO4D4N36IT6GUHT3PNJ4QPOUE4FHT2OYNXHO6Z77FUGHH3EVAH3I2KG6OAFLV2HSO3VMDQTKNX4OVWBWUGJ7NP3F6JHRLWKF2F6BTWND7GSF7OVQM25YBH5H723VO123ABC) from table (EucerinSkinCareDev): Missing region in config
at Object.createAskSdkError (E:\projects\nodejs-alexa-sdk-v2-eucerin-skincare-dev\node_modules\ask-sdk-dynamodb-persistence-adapter\dist\utils\AskSdkUtils.js:22:17)
at DynamoDbPersistenceAdapter.<anonymous> (E:\projects\nodejs-alexa-sdk-v2-eucerin-skincare-dev\node_modules\ask-sdk-dynamodb-persistence-adapter\dist\attributes\persistence\DynamoDbPersistenceAdapter.js:121:45)
Can we read and write attributes from DynamoDB using alexa-skill-local? Do we need a different setup to achieve this?
Thanks
I know that this is a really old topic, but I had the same problem a few days ago, and I'm going to explain how I made it work.
You have to download DynamoDB Local and follow the instructions from here.
Once you have configured your local DynamoDB and checked that it is working, you have to pass it in your code to the DynamoDbPersistenceAdapter constructor.
Your code should look similar to:
var awsSdk = require('aws-sdk');

var myDynamoDB = new awsSdk.DynamoDB({
  endpoint: 'http://localhost:8000', // If you change the default url, change it here
  accessKeyId: <your-access-key-id>,
  secretAccessKey: <your-secret-access-key>,
  region: <your-region>,
  apiVersion: 'latest'
});

const { DynamoDbPersistenceAdapter } = require('ask-sdk-dynamodb-persistence-adapter');

return new DynamoDbPersistenceAdapter({
  tableName: tableName || 'my-table-name',
  createTable: true,
  dynamoDBClient: myDynamoDB
});
Where <your-access-key-id>, <your-secret-access-key> and <your-region> are defined in the AWS config and credentials files.
The next step is to launch your server with the alexa-skill-local command as always.
Hope this will be helpful! =)
Presumably you have an AWS config profile that your skill is using when running locally.
You need to edit the AWS config file and set the default region (e.g. us-east-1) there. The region should match the region where your table exists.
Alternatively, if you want to be able to run completely isolated, you may need to write some conditional logic and swap the Dynamo client with one targeting an instance of DynamoDB Local running on your machine.
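A hedged sketch of that conditional logic might look like this (the USE_LOCAL_DYNAMODB environment variable and the local dummy credentials are assumptions; the standard SkillBuilder accepts a custom client via withDynamoDbClient):
const Alexa = require('ask-sdk');
const awsSdk = require('aws-sdk');

function buildDynamoDbClient() {
  if (process.env.USE_LOCAL_DYNAMODB === 'true') {
    // point at DynamoDB Local when running via alexa-skill-local
    return new awsSdk.DynamoDB({
      endpoint: 'http://localhost:8000',
      region: 'local',
      accessKeyId: 'dummy',
      secretAccessKey: 'dummy',
      apiVersion: 'latest'
    });
  }
  // deployed in Lambda, region and credentials come from the environment
  return new awsSdk.DynamoDB({ apiVersion: 'latest' });
}

exports.handler = Alexa.SkillBuilders
  .standard()
  .addRequestHandlers(/* ...your handlers... */)
  .withTableName('devtable')
  .withDynamoDbClient(buildDynamoDbClient())
  .lambda();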
Since this morning, our Firebase application has a problem when writing data to the Realtime Database instance. Even the simplest task, such as adding one key-value pair to an object, triggers
Error: TRIGGER_PAYLOAD_TOO_LARGE: This request would cause a function payload exceeding the maximum size allowed.
It is especially strange since nothing in our code or database has changed for more than 24 hours.
Even something as simple as
Database.ref('environments/' + envkey).child('orders/' + orderkey).ref.set({a:1})
triggers the error.
Apparently, the size of the payload is not the problem, but what could be causing this?
Database structure, as requested:
environments
  env1
  env2
    orders
      223344
        customer: "Peters"
        country: "NL"
        items
          item1
            code: "a"
            value: "b"
          item2
            code: "x"
            value: "2"
Ok I figured this out. The issue is not related to your write function, but to one of the cloud functions the write action would trigger.
For example, we have a structure like:
/collections/data/abcd/items/a
in JSON:
"collections": {
"data": {
"abc": {
"name": "example Col",
"itemCount": 5,
"items": {
"a": {"name": "a"},
"b": {"name": "b"},
"c": {"name": "c"},
"d": {"name": "d"},
"e": {"name": "e"},
}
}
}
}
Every write into an item was failing, no matter how we did it: API, JavaScript, even a basic write in the console.
I decided to look at our cloud functions and found this:
const countItems = (collectionId) => {
  return firebaseAdmin.database().ref(`/collections/data/${collectionId}/items`).once('value')
    .then(snapshot => {
      const items = snapshot.val();
      const filtered = Object.keys(items).filter(key => {
        const item = items[key];
        return (item && !item.trash);
      });
      return firebaseAdmin.database().ref(`/collections/meta/${collectionId}/itemsCount`)
        .set(filtered.length);
    });
};

export const onCollectionItemAdd = functions.database.ref('/collections/data/{collectionId}/items/{itemId}')
  .onCreate((change, context) => {
    const { collectionId } = context.params;
    return countItems(collectionId);
  });
On its own it's nothing, but that trigger reads ALL items, and by default Firebase Cloud Functions sends the entire snapshot to the CF even if we don't use it. In fact it sends the previous and after values too, so if you (like us) have a TON of items at that point, my guess is that the payload it tries to send to the Cloud Function is way too big.
I removed the count function from our CFs and boom, back to normal. I'm not sure of the "correct" way to do the count if we can't have the trigger at all, but I'll update this if we find one...
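For what it's worth, one hedged alternative for the count (a sketch only, and not necessarily what the payload limit itself is about) is to maintain the counter incrementally with a transaction, so the function never has to read the whole items subtree on every write:
export const onCollectionItemAdd = functions.database
  .ref('/collections/data/{collectionId}/items/{itemId}')
  .onCreate((snapshot, context) => {
    const { collectionId } = context.params;
    // bump the counter instead of re-counting all items
    return firebaseAdmin.database()
      .ref(`/collections/meta/${collectionId}/itemsCount`)
      .transaction((current) => (current || 0) + 1);
  });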
The TRIGGER_PAYLOAD_TOO_LARGE error is part of a new feature Firebase is rolling out, where our existing RTDB limits are being strictly enforced. The reason for the change is to make sure that we aren't silently dropping any Cloud Functions triggers, since any event exceeding those limits can't be sent to Functions.
You can turn this feature off yourself by making this REST call:
curl -X PUT -d "false" https://<namespace>.firebaseio.com/.settings/strictTriggerValidation/.json?auth\=<SECRET>
Where <SECRET> is your DB secret
Note that if you disable this, the requests that are currently failing may go through, but any Cloud Functions you have that trigger on the requests exceeding our limits will fail to run. If you are using database triggers for your functions, I would recommend you re-structure your requests so that they stay within the limits.
I think I'm missing something. I have read a lot of posts/examples and I can't save images on my system (I work locally).
What is my goal?
I'm trying to save a file submitted by the user in a folder (server-side). Does it sound easy? Maybe.
What's the issue?
Short answer: I can't figure out how to save the file in my folder.
Do you want more information?
The story of a file upload
I have read that to use the path parameter like new FS.Store.FileSystem("thumb", { path: "/public/images/user/avatar" }), I have to declare my collection server-side. But when I call Avatars.insert() (Avatars is the name of my collection), it seems like it doesn't exist. This makes sense because this collection exists only on the server.
So I've tried to declare the collection both server-side and client-side (I've read some examples about that) and that works! The file is correctly added to MongoDB, but my folder is still empty (I'm not sure, but I think this is because Avatars.insert() is called client-side, so the collection used is the client-side one, the one which cannot take the path parameter).
But no problem! I've created 2 Meteor methods (one client-side and one server-side) called "updateAvatarFile". With this "trick", I'm able to do Meteor.call("updateAvatarFile", field.files[0]), which calls both the server-side and client-side methods. So I can do some UI stuff in the client-side one and upload the file in the other. But I can't pass the file as a parameter.
field.files[0] contains the file client-side, but server-side it's an empty object. My question is: how can I upload a file?
I can't do it client-side (because I can't use the path parameter), and I can't pass the file to the server. I'm sure that I'm missing something but I can't figure out what.
Here is what I have:
// /client/views/templates/settings.js
Template.settings.events({
  'submit #updateAvatar': function (e, template) {
    e.preventDefault();
    const field = document.getElementsByName('avatar')[0];
    Meteor.call('updateAvatarFile', field.files[0]);
  }
});

// /client/lib/clientMethods.js
Meteor.methods({
  'updateAvatarFile': function (file) {
    // blabla
  }
});

// /server/lib/serverMethods.js
Meteor.methods({
  'updateAvatarFile': function (file) {
    Avatars.insert(file, function (err, fileObj) {
      if (err) {
        console.log(err);
      } else {
        console.log(fileObj);
      }
    });
  }
});
// /server/collections/serverAvatarCollection.js
Avatars = new FS.Collection("avatars", {
  stores: [
    new FS.Store.FileSystem("original", { path: "/public/images/user/avatar" }),
    new FS.Store.FileSystem("thumb", { path: "/public/images/user/avatar" })
  ],
  filter: {
    maxSize: 1000000, // 1 MB
    allow: { contentTypes: ['image/*'] }
  },
  onInvalid: function (message) {
    //throw new Meteor.Error(403, message);
  }
});
// /client/collections/clientAvatarCollection.js
// (this one is actually in a comment block)
Avatars = new FS.Collection("avatars", {
  stores: [
    new FS.Store.FileSystem("original"),
    new FS.Store.FileSystem("thumb")
  ],
  filter: {
    maxSize: 1000000, // 1 MB
    allow: { contentTypes: ['image/*'] }
  },
  onInvalid: function (message) {
    alert(message);
  }
});
I've also tried to insert the file with the client-side method but I've got the same result (the file is added to MongoDB but not saved into a folder).
Using different path values didn't work either.
EDIT: Or maybe I'm trying to use the wrong package? To my mind, transforming a picture into chunks and saving them into MongoDB sounds really weird and bad. Do you have any advice?
EDIT 2:
Answer to Michel Floyd (sorry about that, the character limit is annoying).
First, thanks for your answer!
1. At the moment, I'm just trying Meteor, so I have both autopublish and insecure installed. Not publishing/subscribing to my collection shouldn't cause an issue, is that right?
2. Before your answer, I had tried to set up a collection available to both server and client by putting my avatarCollection.js in /collections. I was thinking that paths which don't contain server or client are automatically available to both sides. So what is the difference between /collections and /lib? (I know that all files in a "lib" folder are loaded first.) Is it a bad practice to put collections in /collections? Maybe I should create a /lib/collections folder?
3. (The last point, sorry for the long comment.) I've tried what you advised above, but it doesn't seem to work (or I am doing something wrong, again ><). When I use Avatars.insert(), CollectionFS doesn't save the file on my local storage. I've also checked the root of my HDD in case CollectionFS interpreted / as the root of my machine, but it doesn't. On the other hand, CollectionFS has created 4 collections in MongoDB (cfs._tempstore.chunks, cfs.avatars.filerecord, cfs_gridfs._tempstore.chunks and cfs_gridfs._tempstore.files) - the gridfs part is weird, since I have GridFS installed but I use FileSystem. Those collections are not empty. That's why I think CollectionFS splits my file into chunks and saves them in MongoDB.
You're generally on the right track. CollectionFS uses storage adapters to deal with actual file storage. You can put files on S3, gridFS, or your local file system as you're trying to do. Putting the file contents in Mongo directly is usually avoided.
Firstly, define your collection:
Avatars = new FS.Collection("avatars", {
  stores: [
    new FS.Store.FileSystem("original", { path: "/public/images/user/avatar" }),
    new FS.Store.FileSystem("thumb", { path: "/public/images/user/avatar" })
  ],
  filter: {
    maxSize: 1000000, // 1 MB
    allow: { contentTypes: ['image/*'] }
  },
  onInvalid: function (message) {
    //throw new Meteor.Error(403, message);
  }
});
in /lib! This will make it available to both the server and the client.
Secondly, make sure you publish your avatars collection from the server and subscribe to it from the client. I don't see any publish/subscribe code in your question. You need it.
Thirdly, if you just do:
Avatars.insert(...);
on the client with a file, then CollectionFS will take care of storing it for you. The thing is, it won't be instantly available: it can take a little while for the actual upload and storage to happen. You can look at fileObj.isUploaded, for example, to see if the file is ready.
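For example, here is a hedged sketch of that client-side flow, reusing the #updateAvatar form from the question (the polling interval is an assumption; CollectionFS exposes isUploaded() on the file object):
// /client/views/templates/settings.js
Template.settings.events({
  'submit #updateAvatar': function (e) {
    e.preventDefault();
    const field = document.getElementsByName('avatar')[0];

    // insert the File directly on the client; CollectionFS handles the upload
    Avatars.insert(field.files[0], function (err, fileObj) {
      if (err) {
        console.log(err);
        return;
      }
      // storage happens in the background, so poll until it is ready
      const interval = Meteor.setInterval(function () {
        if (fileObj.isUploaded()) {
          Meteor.clearInterval(interval);
          console.log('avatar stored:', fileObj._id);
        }
      }, 500);
    });
  }
});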