Firestore triggers for Firebase Cloud Functions not working - firebase

Goal
My goal is to print the contents of any created/updated/deleted document in Cloud Firestore using Cloud Functions.
Setup
I'm currently using the Firebase emulator for testing purposes. The Admin SDK is able to successfully connect to the Firestore database, but triggers won't be called. Here is my firebase.json for my project:
{
  "firestore": {
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json"
  },
  "emulators": {
    "auth": {
      "port": 9099,
      "host": "127.0.0.1"
    },
    "firestore": {
      "port": 8080,
      "host": "127.0.0.1"
    },
    "ui": {
      "enabled": true
    }
  },
  "functions": {
    "source": "functions",
    "predeploy": [
      "npm --prefix \"$RESOURCE_DIR\" run lint"
    ]
  }
}
All triggers are being loaded according to the logs, which reveal the following setup:
Emulator UI logging to ui-debug.log
auth function initialized.
auth function initialized.
firestore function initialized.
┌─────────────────────────────────────────────────────────────┐
│ ✔ All emulators ready! It is now safe to connect your app. │
│ i View Emulator UI at http://localhost:4000 │
└─────────────────────────────────────────────────────────────┘
┌────────────────┬────────────────┬─────────────────────────────────┐
│ Emulator │ Host:Port │ View in Emulator UI │
├────────────────┼────────────────┼─────────────────────────────────┤
│ Authentication │ 127.0.0.1:9099 │ http://localhost:4000/auth │
├────────────────┼────────────────┼─────────────────────────────────┤
│ Functions │ localhost:5001 │ http://localhost:4000/functions │
├────────────────┼────────────────┼─────────────────────────────────┤
│ Firestore │ 127.0.0.1:8080 │ http://localhost:4000/firestore │
└────────────────┴────────────────┴─────────────────────────────────┘
Emulator Hub running at localhost:4400
Other reserved ports: 4500
Firestore and Auth emulators are on 127.0.0.1 because using the default "localhost" caused them to fail.
index.js
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();

exports.newDoc = functions.firestore
  .document("/{collection}/{docID}")
  .onCreate((snapshot, context) => {
    console.log(snapshot.data());
  });
What I've tried
When new documents are added, even by the Admin SDK itself, the trigger is never called and nothing is logged.
I've reset all security rules as they restricted read/write access.
I've tried the production environment instead of the emulator. I didn't expect this to work, as the functions aren't deployed.

It turns out this is actually the correct setup for the JavaScript Cloud Functions emulator. There was a problem in my environment, and a restart fixed it.
For those having the same issue, I would try the same kinds of troubleshooting:
Try making the path use all wildcards, to see if the path is the problem
Restart your machine, to make sure it's not a problem with your current environment
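The first tip can be sanity-checked without the emulator at all: a top-level path like /{collection}/{docID} only matches documents exactly two path segments deep, so a write to a subcollection will never fire it. A minimal sketch, where matchesTopLevelDoc is a hypothetical helper (not part of any SDK):

```javascript
// Hypothetical helper: does a Firestore document path match the
// two-segment wildcard "/{collection}/{docID}"?
function matchesTopLevelDoc(path) {
  const segments = path.split("/").filter(Boolean);
  return segments.length === 2;
}

console.log(matchesTopLevelDoc("users/alice"));          // true: trigger fires
console.log(matchesTopLevelDoc("rooms/r1/messages/m1")); // false: subcollection doc, trigger skipped
```

If your real paths are deeper than the wildcard pattern, the trigger silently never runs, which looks identical to a broken emulator.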

Related

Check if Firebase Emulator is connected from Node scripts

I am writing unit tests for my Firebase Functions and I want to automatically connect to the functions, auth, storage, etc. emulators from my script without having to specify whether I am testing in a local environment or a development environment.
Is there any way I can write a script to see if the Firebase Emulator is running on my local machine from an external node script?
For example, is there a way I can see processes running on specific local ports from a node script?
I tried using
import { exec } from "child_process";
const checkEmulator = exec("lsof -i:5000");
(I am using macOS.)
Then using the output to determine if the Firebase Functions Emulator is running on port 5000, but the output of the exec function does not make any sense to me.
Is there a more efficient way to check if the emulator is running on your local machine?
Thanks for any help!
You can use an HTTP GET request with something like curl:
curl localhost:4400/emulators
This will print out a JSON object listing all running emulators. Example:
{
  "hub": {
    "name": "hub",
    "host": "localhost",
    "port": 4400
  },
  "functions": {
    "name": "functions",
    "host": "localhost",
    "port": 5001
  },
  "firestore": {
    "name": "firestore",
    "host": "localhost",
    "port": 8080
  }
}
Taken from https://firebase.google.com/docs/emulator-suite/install_and_configure
Alternatively, the psaux package can list active processes. Each emulator is a separate process and is run from .../.cache/firebase/emulators.
https://www.npmjs.com/package/psaux

FlutterFire - Working with Functions emulator and actual android device

I am working in a Flutter project and using my personal device for debugging. In addition, I need to use the Functions emulator for Firebase, but I keep getting a functions-unavailable error in Flutter when I try to call my function via the emulator.
What I have...
firebase.json
"emulators": {
"auth": {
"host" : "0.0.0.0",
"port": 9099
},
"functions": {
"host" : "0.0.0.0",
"port": 5001
},}
in main.dart
FirebaseFunctions.instance.useFunctionsEmulator("172.20.10.7", 5001); // my system ip in local network
in pubspec.yaml
firebase_auth: ^3.3.6
cloud_firestore: ^3.1.7
cloud_functions: ^3.2.7
firebase_database: ^9.0.6
where 172.20.10.7 is my system's IP on the local network.
I am using Ubuntu 20.04.3 LTS, and I don't know what I should do to overcome this.

Running multiple apps with PM2 from same repo

Some context:
I have a single repo (nuxt application) that is used to deploy to multiple apps/domains
all apps are on the same server
each app is in a separate folder with a clone of the repo, each folder is served on its own domain (nginx)
each app has a different env file, the most important difference is a domain id (e.g. DOMAIN_ID=1, etc.)
before build, I have a node script that does some setup work based on this DOMAIN_ID
I would like to use PM2 to:
use a single dir with the repo for all my domains
upon running pm2 deploy production I would like to be able to deploy all the domains; each domain should run its setup script before doing the build
each domain should build in a subfolder so I can configure nginx to serve the app for a specific domain from its folder
I tried to create an ecosystem file like so:
module.exports = {
  apps: [
    {
      name: 'Demo1',
      exec_mode: 'cluster',
      instances: 'max',
      script: './node_modules/nuxt/bin/nuxt.js',
      args: 'start',
      env: {
        DOMAIN_ID: 1,
      },
    },
    {
      name: 'Demo2',
      exec_mode: 'cluster',
      instances: 'max',
      script: './node_modules/nuxt/bin/nuxt.js',
      args: 'start',
      env: {
        DOMAIN_ID: 2,
      },
    },
  ],
  deploy: {
    production: {
      host: 'localhost',
      user: 'root',
      ref: 'origin/master',
      repo: 'my_repo',
      path: 'path_to_repo',
      'post-setup': 'npm install && node setup.js',
      'post-deploy': 'npm run build:setup && pm2 startOrRestart ecosystem.config.js --env production',
    },
  },
}
but it doesn't seem to work.
With the above ecosystem file the processes are created but when I access the domain for Demo1, pm2 serves randomly from Demo1 or Demo2 process.
There should be 2 dist folders somewhere, one for each of the apps.
I'm wondering if the above config is good and I'm just having an nginx issue or pm2 can't actually handle my use case.
To achieve what you're after, you'll need the following for each app:
A directory to serve your production build files from.
A node server instance, running on a unique port (eg. 3000, 3001).
A suitable nginx virtual host configuration for each app.
First, the build directories. The nuxt build script will look for a nuxt.config.js and a .env file in the srcDir (the project root by default), produce the production build files for your app based on those files, and store the output at /.nuxt (again, by default). Passing the --config-file and --dotenv arguments to the nuxt build command lets you point to different config and env files, thus enabling you to produce separate builds from a single repo.
Eg:
-- appRoot (the default srcDir, we'll change this in nuxt.config.js)
|__ node_modules
|__ package.json
|__ ...etc
|__ src
|__ commons
| |__ (shared configs, plugins, components, etc)
|__ app1
| |__ nuxt.config.js
| |__ ecosystem.config.js
| |__ .env
|__ app2
|__ nuxt.config.js
|__ ecosystem.config.js
|__ .env
For convenience, you could create the following scripts in package.json. To produce a build for app1, you run npm run app1:build.
"scripts": {
...
"app1:build": "nuxt build --config-file src/app1/nuxt.config.js --dotenv src/app1/.env",
"app2:build": "nuxt build --config-file src/app2/nuxt.config.js --dotenv src/app2/.env",
...
}
Now that we're pointing our build scripts at each app's nuxt.config.js file, we need to update those files and specify a srcDir and buildDir. The buildDir is where the output from each build command will be stored.
nuxt.config.js
...
srcDir: __dirname,
buildDir: '.nuxt/app1', // this path is relative to the project root
...
That's it for building. For serving...
Each app needs a unique production server instance, running on its own unique port. By default, nuxt start will launch a server listening on port 3000, based on the nuxt.config.js file at the root of the project. As with the build command, we can pass arguments to change the default behaviour.
You could add the commands to package.json:
"scripts": {
...
"app1:build": "nuxt build --config-file src/app1/nuxt.config.js --dotenv src/app1/.env",
"app1:start": "nuxt start --config-file src/app1/nuxt.config.js --dotenv src/app1/.env -p=3000",
"app2:build": "nuxt build --config-file src/app2/nuxt.config.js --dotenv src/app2/.env",
"app2:start": "nuxt start --config-file src/app2/nuxt.config.js --dotenv src/app2/.env -p=3001",
...
}
By telling nuxt start to use app-specific nuxt.config.js files, and having those point to unique buildDirs, we tell our servers which directories to serve from for each app.
Important
Make sure when starting a production server you specify unique port numbers. Either add it to the start command (as above) or inside the nuxt.config.js file of each app.
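For the second option, the port can live in each app's config file instead of the start command. A sketch assuming Nuxt 2's server option and the hypothetical src/app2 layout from above:

```javascript
// src/app2/nuxt.config.js (sketch): the port must be unique per app
export default {
  srcDir: __dirname,
  buildDir: ".nuxt/app2", // relative to the project root
  server: {
    host: "127.0.0.1",
    port: 3001, // app1 would use 3000
  },
};
```

Putting the port here means the -p argument can be dropped from the start scripts, at the cost of one more thing to keep in sync across app configs.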
Now you have unique builds, and unique server instances, you need to configure nginx to serve the correct app for each domain (I'm assuming you either know how to configure nginx to support virtual hosts, or someone on your team is handling it for you). Here's a stripped down config example:
server {
  ...
  server_name app1.com;
  root /path/to/appRoot;
  ...
  location / {
    ...
    proxy_pass http://localhost:3000;
    ...
  }
  ...
}
Each app's nginx config can point to the same root; it's the proxy_pass that routes the request to the correct node server, which in turn knows which app to serve because of the arguments we passed to our nuxt start command.
PM2
I use PM2 to manage the node server instances, so a deployment script for the given example might look like:
Handle your version control/env files, and then...
cd /appRoot
npm install
npm run app1:build
pm2 reload src/app1/ecosystem.config.js
With app1's ecosystem.config.js file set up like so:
module.exports = {
  apps: [
    {
      name: "app1",
      exec_mode: "cluster",
      instances: "max",
      script: "./node_modules/nuxt/bin/nuxt.js",
      args: "start --config-file src/app1/nuxt.config.js --dotenv src/app1/.env -p=3000"
    }
  ]
}
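If you'd rather keep a single ecosystem file, as in the question, the essential fix is that each app entry passes its own --config-file/--dotenv arguments and a unique port. A sketch following the same hypothetical src/app1 and src/app2 layout:

```javascript
// ecosystem.config.js (sketch): both apps in one file; per-app configs and
// unique ports are what the single-config attempt in the question was missing.
module.exports = {
  apps: [
    {
      name: "app1",
      script: "./node_modules/nuxt/bin/nuxt.js",
      args: "start --config-file src/app1/nuxt.config.js --dotenv src/app1/.env -p=3000",
    },
    {
      name: "app2",
      script: "./node_modules/nuxt/bin/nuxt.js",
      args: "start --config-file src/app2/nuxt.config.js --dotenv src/app2/.env -p=3001",
    },
  ],
};
```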
Might need some tweaking to suit your needs, but I hope this helps!

Unable to access firebase emulator UI

I have the following firebase.json
{
  "emulators": {
    "firestore": {
      "port": 8080
    },
    "ui": {
      "enabled": true,
      "host": "localhost",
      "port": 4000
    },
    "database": {
      "port": 9000
    }
  }
}
But I can't access localhost:4000.
My firebase-tools version is 8.4.3.
Running firebase emulators:start shows the following:
i emulators: Starting emulators: firestore, database
⚠ firestore: Did not find a Cloud Firestore rules file specified in a firebase.json config file.
⚠ firestore: The emulator will default to allowing all reads and writes. Learn more about this option: https://firebase.google.com/docs/emulator-suite/install_and_configure#security_rules_configuration.
i firestore: Firestore Emulator logging to firestore-debug.log
⚠ database: Did not find a Realtime Database rules file specified in a firebase.json config file. The emulator will default to allowing all reads and writes. Learn more about this option: https://firebase.google.com/docs/emulator-suite/install_and_configure#security_rules_configuration.
i database: Database Emulator logging to database-debug.log
┌──────────────────────────────────────────────────────────────┐
│ ✔ All emulators ready! It is now safe to connect your apps. │
└──────────────────────────────────────────────────────────────┘
┌───────────┬────────────────┐
│ Emulator │ Host:Port │
├───────────┼────────────────┤
│ Firestore │ localhost:8080 │
├───────────┼────────────────┤
│ Database │ localhost:9000 │
└───────────┴────────────────┘
Other reserved ports:
Thanks in advance for any help!
I made the mistake of not choosing a default project.
With a default project chosen, the Emulator UI came up.
In case anyone is using docker for the emulator and trying to access it from the host using port mappings, make sure to set the host property of the emulator UI config to 0.0.0.0. This goes for all other services you want to access from the host as well.
{
  "emulators": {
    "ui": {
      "host": "0.0.0.0",
      "port": 4000
    },
    // other services
  }
}
Make sure that in firebase.json, ui.enabled is set to true under emulators, like below:
"emulators": {
  "pubsub": {
    "port": 8085
  },
  "ui": {
    "enabled": true
  }
}
Then run the emulators with firebase emulators:start. The output will include the Emulator UI URL; open that URL in the browser and the UI will come up.

Mupx deployment with Meteor.js fails when "Installing Docker"

I have Ubuntu 14.04, which I develop on, and I want to have a test server on the same computer. It is run in VirtualBox.
So I followed all the steps on GitHub for the Mupx setup and watched the video that the Meteor.js guide told me to watch. When I get to the command:
mupx setup
it shows me the screen with the error:
nejc@nejc-vb:~/Meteor Projects/CSGO/CSGO-deploy$ mupx setup
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
Configuration file : mup.json
Settings file : settings.json
“ Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup ”
Started TaskList: Setup (linux)
[my_public_IP] - Installing Docker
events.js:72
throw er; // Unhandled 'error' event
^
Error: Timed out while waiting for handshake
at null._onTimeout (/usr/local/lib/node_modules/mupx/node_modules/nodemiral/node_modules/ssh2/lib/client.js:138:17)
at Timer.listOnTimeout [as ontimeout] (timers.js:121:15)
My mup.json file looks like this:
{
  // Server authentication info
  "servers": [
    {
      "host": "my_public_IP",
      "username": "nejc",
      "password": "123456",
      // or pem file (ssh based authentication)
      // WARNING: Keys protected by a passphrase are not supported
      //"pem": "~/.ssh/id_rsa"
      // Also, for non-standard ssh port use this
      //"sshOptions": { "port": 49154 },
      // server specific environment variables
      "env": {}
    }
  ],
  // Install MongoDB on the server. Does not destroy the local MongoDB on future setups
  "setupMongo": true,
  // Application name (no spaces).
  "appName": "CSGO",
  // Location of app (local directory). This can reference '~' as the users home directory.
  // i.e., "app": "~/Meteor Projects/CSGO",
  // This is the same as the line below.
  "app": "/home/nejc/Meteor Projects/CSGO",
  // Configure environment
  // ROOT_URL must be set to your correct domain (https or http)
  "env": {
    "PORT": 80,
    "ROOT_URL": "http://my_public_IP"
  },
  // Meteor Up checks if the app comes online just after the deployment.
  // Before mup checks that, it will wait for the number of seconds configured below.
  "deployCheckWaitTime": 30,
  // show a progress bar while uploading.
  // Make it false when you deploy using a CI box.
  "enableUploadProgressBar": true
}
