I am exporting my Firebase Functions from a shared builder so that all functions use the same region:
import * as firebaseFunctions from 'firebase-functions';
export const functions = firebaseFunctions.region(FIREBASE_REGIONS.europe);
export const scheduledDoSomething = functions.pubsub.schedule('every 1 minutes').onRun(doSomething);
However, when I export a function using the functions constant, the region is sometimes set to the US. The memory is also sometimes set to 2GB, even though the default is 256MB, which forces me to explicitly set .runWith({ memory: '256MB' }) on the function. This only happens for some functions (not all), so I end up specifying redundant properties when exporting them:
// The region is already set on the *functions* object
export const scheduledDoSomething = functions.runWith({ memory: '256MB' }).region('europe-west1').pubsub.schedule('every 1 minutes').onRun(() => doSomething());
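For illustration, a small helper that builds a fresh, fully configured builder per export would avoid repeating the chain on every function (a sketch only; FIREBASE_REGIONS and doSomething are the values from the snippets above, and the import paths are assumptions):
import * as firebaseFunctions from 'firebase-functions';
import { FIREBASE_REGIONS } from './constants'; // assumed location of the constant
import { doSomething } from './do-something'; // assumed location of the handler

// Create a new, fully configured builder for each export instead of sharing one instance.
const regionalFunctions = () =>
  firebaseFunctions
    .region(FIREBASE_REGIONS.europe)
    .runWith({ memory: '256MB' });

export const scheduledDoSomething = regionalFunctions()
  .pubsub.schedule('every 1 minutes')
  .onRun(() => doSomething());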
My function file has some code in the global context.
import * as functions from 'firebase-functions';
import { initBot } from './bot';
const bot = initBot(process.env.BOT_TOKEN ?? '');
functions.logger.debug('Setting webhook on cold start');
const adminConfig = JSON.parse(process.env.FIREBASE_CONFIG ?? '');
// eslint-disable-next-line @typescript-eslint/no-floating-promises
bot.telegram.setWebhook(
  `https://us-central1-${adminConfig.projectId}.cloudfunctions.net/${process.env.K_SERVICE}`
);
// handle all telegram updates with HTTPS trigger
exports.bot = functions
  .runWith({ secrets: ['BOT_TOKEN'], memory: '128MB' })
  .https.onRequest(async (request, response) => {
...
As the documentation states:
When a cold start occurs, the global context of the function is evaluated.
I want to use this feature to run some code on cold start. This approach is taken from the telegraf.js example.
However, when I try to deploy the function, the Firebase CLI evaluates the global context, and the code throws an error because the environment variable is not available there.
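For illustration, a guarded variant of the global code above would at least avoid the throw during deployment (a sketch only; whether this defeats the purpose of the cold-start initialisation is part of the question below):
import * as functions from 'firebase-functions';
import { initBot } from './bot';

// Only run the cold-start code when the runtime actually provides the variables,
// so evaluating this module without them (e.g. during deployment) is a no-op.
if (process.env.BOT_TOKEN && process.env.FIREBASE_CONFIG) {
  const bot = initBot(process.env.BOT_TOKEN);
  const adminConfig = JSON.parse(process.env.FIREBASE_CONFIG);
  functions.logger.debug('Setting webhook on cold start');
  // eslint-disable-next-line @typescript-eslint/no-floating-promises
  bot.telegram.setWebhook(
    `https://us-central1-${adminConfig.projectId}.cloudfunctions.net/${process.env.K_SERVICE}`
  );
}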
Questions:
Why does the CLI evaluate the global context, which obviously leads to errors because the proper environment is not available?
Is there any way to avoid this?
Is it a correct understanding of the recommendations on the above-mentioned documentation page that developers are encouraged to do some cold-start computation in the global context of the function module? Are there any documented restrictions?
I have a very simple Cloud Function that triggers on a Firestore document onCreate. When I deploy this function I'm getting the error
Functions deploy had errors with the following functions:
makeUppercase
In the cloud logs:
ERROR: error fetching storage source: generic::unknown: retry budget exhausted (3 attempts): fetching gcs source: unpacking source from gcs: source fetch container exited with non-zero status: 1
My function .ts
import * as functions from "firebase-functions";
exports.makeUppercase = functions.firestore.document("/clients/{documentId}")
    .onCreate((snap, context) => {
      // Grab the current value of what was written to Firestore.
      const original = snap.data().original;
      functions.logger.log("Uppercasing", context.params.documentId, original);
      const uppercase = original.toUpperCase();
      return snap.ref.set({uppercase}, {merge: true});
    });
Also, please check my Firestore:
In package.json the Node version is 14. I don't know what went wrong; I have been trying this for a couple of hours and always get the same error.
I've tried the export data feature from Firebase to 'backup' my database, but much to my surprise it didn't export the sub-collections, despite them being uniquely named. Is there a way to export the entire database?
I know there are NPM packages that can do this, but they don't export to the binary format that GCP exports to, which is more space-efficient.
EDIT:
I've tried to export the data via the scheduled cloud function and the GCP console. I followed the instructions exactly and didn't change anything.
They both uploaded a folder with the contents in the images below.
The root directory:
all_namespaces/all_kinds
I imported the 2020-11-05T14:40:04_75653.overall_export_metadata via the GCP console and, sure enough, all the top-level collections and documents came back, but none of the sub-collections were there. I expected that all the collections and sub-collections would be restored from the import.
So in summary I tried to export the data via two methods and upload via one.
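For reference, the console import should be equivalent to an Admin API call along these lines (a sketch only; the bucket name is a placeholder and the export folder is the one shown above):
import * as firestore from '@google-cloud/firestore';

const client = new firestore.v1.FirestoreAdminClient();

// Sketch: programmatic equivalent of the console import, pointing at the folder
// that contains the .overall_export_metadata file (bucket name is a placeholder).
export async function importFirestoreBackup(projectId: string) {
  const [operation] = await client.importDocuments({
    name: client.databasePath(projectId, '(default)'),
    inputUriPrefix: 'gs://BUCKET_NAME/2020-11-05T14:40:04_75653',
    collectionIds: [], // empty = import everything contained in the export
  });
  console.log(`Operation Name: ${operation.name}`);
}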
Here's the scheduled cloud function:
const functions = require('firebase-functions');
const firestore = require('@google-cloud/firestore');
const client = new firestore.v1.FirestoreAdminClient();
// Replace BUCKET_NAME
const bucket = 'gs://BUCKET_NAME';
exports.scheduledFirestoreExport = functions.pubsub
  .schedule('every 24 hours')
  .onRun((context) => {
    const projectId = process.env.GCP_PROJECT || process.env.GCLOUD_PROJECT;
    const databaseName =
      client.databasePath(projectId, '(default)');
    return client.exportDocuments({
      name: databaseName,
      outputUriPrefix: bucket,
      // Leave collectionIds empty to export all collections
      // or set to a list of collection IDs to export,
      // collectionIds: ['users', 'posts']
      collectionIds: []
    })
      .then(responses => {
        const response = responses[0];
        console.log(`Operation Name: ${response['name']}`);
      })
      .catch(err => {
        console.error(err);
        throw new Error('Export operation failed');
      });
  });
So with the above code, I'd expect it to export everything in Firestore. For example, if I had the sub-collection:
orders/{order}/items
I'd expect that when I import the data via the GCP console, I'd get back the sub-collection listed above. However, I only get back the top level, i.e.
orders/{order}
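As far as I understand, collectionIds filters by collection ID (i.e. collection group) rather than by path, so an explicit list covering the sub-collection would look something like this (a sketch only; the bucket and collection names are placeholders):
import * as firestore from '@google-cloud/firestore';

const client = new firestore.v1.FirestoreAdminClient();

// Sketch: explicitly naming both the top-level ID and the sub-collection ID.
// 'items' matches every collection with that ID, including orders/{order}/items.
export async function exportOrdersWithItems(projectId: string) {
  const [operation] = await client.exportDocuments({
    name: client.databasePath(projectId, '(default)'),
    outputUriPrefix: 'gs://BUCKET_NAME', // replace with the real bucket
    collectionIds: ['orders', 'items'],
  });
  console.log(`Operation Name: ${operation.name}`);
}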
I'm using Firebase Functions with Node.js and I'm trying to create multiple environments. As far as I've read, I just need to create separate projects for that in Firebase, which I did.
I'm using Flamelink as well and I want to achieve the same. I actually have a Bonfire plan for Flamelink that allows multiple environments.
My concern is that the different environments in Flamelink write into the same database in Firebase, separated only by an environment flag, so whenever I want to query something from the DB I also have to specify my environment.
Is there a way to have different databases for different Flamelink environments with my setup, so I only specify the environment in my config and not in my queries?
Currently it is not possible to have a database per environment using Flamelink.
The only way to achieve this is to add both projects to Flamelink.
The Flamelink JS SDK can, however, be used within a Cloud Function and would alleviate some of the complexity of working with multiple environments.
The Flamelink JS SDK takes in an environment parameter (along with some others, like locale and database type) when it is initialised, contextualising the use of the SDK methods with the environment.
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';
import * as flamelink from 'flamelink/app';
import 'flamelink/content';
admin.initializeApp();
const firebaseApp = admin.app();
const flApp = flamelink({
  firebaseApp,
  dbType: 'cf',
  env: 'staging',
  locale: 'en-US',
});
export const testFunction = functions.https.onRequest(async (request, response) => {
  if (request.query.env) {
    flApp.settings.setEnvironment(String(request.query.env)); // example 'production'
  }
  try {
    const posts = await flApp.content.get({ schemaKey: 'blogPosts' });
    response.status(200).json({ posts });
  } catch (e) {
    // handle error
  }
});
Depending on your connected front-end framework/language, you can pass in the environment using environment variables.
JS client example:
const env = (process.env.FLAMELINK_DATA_ENV || 'staging').toLowerCase()
await fetch(`https://yourhost.cloudfunctions.net/testFunction?env=${env}`)
I am currently exporting my Firestore data to a timestamped subdirectory in a bucket using the code below. It runs in a scheduled (firebase) cloud function.
import { client } from "../firebase-admin-client";
import { getExportCollectionList } from "./helpers";
export async function createFirestoreExport() {
  const bucket = `gs://${process.env.GCLOUD_PROJECT}_backups/firestore`;
  const timestamp = new Date().toISOString();
  try {
    const databaseName = client.databasePath(
      process.env.GCLOUD_PROJECT,
      "(default)"
    );
    const collectionsToBackup = await getExportCollectionList();
    const responses = await client.exportDocuments({
      name: databaseName,
      outputUriPrefix: `${bucket}/${timestamp}`,
      collectionIds: collectionsToBackup
    });
    const response = responses[0];
    console.log(`Successfully scheduled export operation: ${response.name}`);
    return response;
  } catch (err) {
    console.error(err);
    throw new Error(`Export operation failed: ${err.message}`);
  }
}
This has the advantage that you get a list of exports, each under its own timestamp. For example, my script which imports the data into BigQuery will figure out the latest timestamp and load the data from there.
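For context, loading one exported collection into BigQuery looks roughly like this (a sketch only; the dataset, table and collection names are placeholders, and the file layout is assumed from the export function above):
import { BigQuery } from '@google-cloud/bigquery';
import { Storage } from '@google-cloud/storage';

// Sketch: Firestore exports are loaded with the DATASTORE_BACKUP source format,
// pointing at the export_metadata file of a single collection.
export async function loadCollectionIntoBigQuery(timestamp: string) {
  const bigquery = new BigQuery();
  const storage = new Storage();

  const metadataFile = storage
    .bucket(`${process.env.GCLOUD_PROJECT}_backups`)
    .file(`firestore/${timestamp}/all_namespaces/kind_orders/all_namespaces_kind_orders.export_metadata`);

  const [job] = await bigquery
    .dataset('firestore_export')
    .table('orders')
    .load(metadataFile, {
      sourceFormat: 'DATASTORE_BACKUP',
      writeDisposition: 'WRITE_TRUNCATE',
    });
  console.log('Load job finished:', job.status?.state);
}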
This approach has some drawbacks, I think. There is no way to list directories in Cloud Storage, so in order to figure out the latest timestamp I have to fetch a list of all objects (meaning all files of all exports) and then split and reduce these paths to extract the unique timestamps.
When keeping months of exports, I can imagine this operation becoming quite inefficient.
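To illustrate, the "split and reduce" step amounts to something like this (a sketch only, assuming the bucket layout produced by the export function above and the @google-cloud/storage client):
import { Storage } from '@google-cloud/storage';

// Sketch: list every object under the firestore/ prefix (i.e. every file of
// every export) and reduce the paths to the set of timestamp "directories".
export async function getLatestExportTimestamp(): Promise<string | undefined> {
  const storage = new Storage();
  const bucketName = `${process.env.GCLOUD_PROJECT}_backups`;
  const [files] = await storage.bucket(bucketName).getFiles({ prefix: 'firestore/' });

  const timestamps = new Set(
    files.map((file) => file.name.split('/')[1]) // firestore/<timestamp>/<export files>
  );
  // ISO-8601 timestamps sort lexicographically, so the last one is the latest export.
  return [...timestamps].sort().pop();
}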
So I'm thinking of storing the backups as versioned files, but I have no experience with this. If I understand correctly, with versioned files I can export the data to the same location each time, and a (configurable) number of versions of the files will persist in the bucket. You then have the ability to read any of the available versions if needed.
The BigQuery import script could then load the data from the same location each time, automatically getting the latest export data.
So in my mind this would simplify things. Are there any drawbacks to using versioned files for backup data? Is this a recommended approach?
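For completeness, enabling versioning plus a retention rule on the bucket would look roughly like this (a sketch only and untested; the bucket name and retention count are assumptions):
import { Storage } from '@google-cloud/storage';

// Sketch: turn on object versioning and keep at most 7 noncurrent versions,
// so repeated exports to the same path retain a bounded history.
export async function enableBackupVersioning() {
  const storage = new Storage();
  const bucket = storage.bucket(`${process.env.GCLOUD_PROJECT}_backups`);

  await bucket.setMetadata({ versioning: { enabled: true } });
  await bucket.addLifecycleRule({
    action: { type: 'Delete' },
    condition: { numNewerVersions: 7 },
  });
}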