My project uses @nativescript/firebase (https://github.com/EddyVerbruggen/nativescript-plugin-firebase), but the methods of firebase.firestore.Timestamp are ignored and its properties return undefined.
Below is a minimal reproduction.
app.js
import Vue from "nativescript-vue";
import Home from "./components/Home";
var firebase = require("@nativescript/firebase").firebase;
firebase
.init({})
.then(
function () {
console.log("firebase.init done");
},
function (error) {
console.log("firebase.init error: " + error);
}
);
new Vue({
render: (h) => h("frame", [h(Home)]),
}).$start();
Home.vue
import { firebase } from "@nativescript/firebase";
export default {
computed: {
async message() {
const Ref = firebase.firestore
.collection("comments")
.doc("07bhQeWDf3u1j0B4vNwG");
const doc = await Ref.get();
const hoge = doc.data();
console.log("hoge.commented_at", hoge.commented_at); // CONSOLE LOG: hoge.commented_at Sat Oct 23 2021 22:44:48 GMT+0900 (JST)
console.log("hoge.commented_at.seconds", hoge.commented_at.seconds); // CONSOLE LOG: hoge.commented_at.seconds undefined
const hogeToDate = hoge.toDate();
console.log("hogeToDate", hogeToDate); // no console.log appear
return hogeToDate; // simulator shows "object Promise"
},
},
};
I also tried const hogeTimestampNow = firebase.firestore.Timestamp.now(); and then no console.log appears at all...
Environment
vue.js
Node.js v14.17.6
nativescript v8.1.2
nativescript-vue v2.9.0
@nativescript/firebase v11.1.3
If you dive into the source of @nativescript/firebase, in particular looking at /src/firebase-common.ts, you can see that firebase is a custom implementation and not the object/namespace normally exported by the ordinary Firebase Web SDK.
It uses a custom implementation so that it can be transformed depending on the platform the code is running on as shown in /src/firebase.android.ts and /src/firebase.ios.ts.
Of particular importance is that Firestore's Timestamp objects are internally converted to JavaScript Date objects when exposed to your code, as each platform has its own version of a Timestamp object. Because the exposed JavaScript Date object doesn't have a seconds property, you get undefined when attempting to access hoge.commented_at.seconds.
The equivalent of Timestamp#seconds would be Math.floor(hoge.commented_at / 1000) (you could also be more explicit with Math.floor(hoge.commented_at.getTime() / 1000) if you don't like relying on JavaScript's type coercion).
function getSeconds(dt: Date) {
return Math.floor(dt.getTime() / 1000)
}
While you can import the Timestamp object from the Modular Web SDK (v9+), when passed into the NativeScript plugin, it would be turned into an ordinary object (i.e. { seconds: number, nanoseconds: number } rather than a Timestamp).
import { Timestamp } from 'firebase/firestore/lite';
const commentedAtTS = Timestamp.fromDate(hoge.commented_at);
docRef.set({ commentedAt: commentedAtTS.toDate() }) // must turn back to Date object before writing!
firebase.firestore.Timestamp does not work via @nativescript/firebase, as @samthecodingman said (https://stackoverflow.com/a/69853638/15966408).
Just use ordinary JavaScript methods instead.
I tried:
getting the timestamp from Firestore and converting it to Unix seconds
getting a date with new Date() and converting it to Unix seconds
and the same number of seconds was logged.
via Firestore
const Ref = firebase.firestore.collection("comments").doc("07bhQeWDf3u1j0B4vNwG");
const doc = await Ref.get();
const hoge = doc.data();
console.log("hoge.commented_at in milliseconds: ", Math.floor(hoge.commented_at / 1000));
// CONSOLE LOG: hoge.commented_at in milliseconds: 1634996688
via JavaScript methods
const getNewDate = new Date("October 23, 2021, 22:44:48 GMT+0900");
// same as hoge.commented_at
console.log("getNewDate in milliseconds: ", getNewDate.getTime() / 1000);
// CONSOLE LOG: getNewDate in milliseconds: 1634996688
Related
I am building an NFT marketplace and everything works for creating NFTs etc., but when attempting to buy an NFT I get this error:
ethers.umd.js?e6ac:4395 Uncaught (in promise) TypeError: Cannot read properties of undefined (reading '_hex')
Here is the code snippet:
const buyNft = async (nft) => {
const web3Modal = new Web3Modal();
const connection = await web3Modal.connect();
const provider = new ethers.providers.Web3Provider(connection);
const signer = provider.getSigner();
const contract = new ethers.Contract(
MarketAddress,
MarketAddressABI,
signer
);
const price = ethers.utils.parseUnits(nft.price.toString(), "ether")
console.log(price)
// code stops here
const transaction = await contract.createMarketSale(nft.tokenId, {
value: price,
});
await transaction.wait();
};
When debugging, it seems that the code stops before the transaction constant.
When I console.log the price I get this:
I tried to remove the toString method, and also tried to spread the price object in the transaction call like this: value: {...price}, but it still didn't work.
The first argument of createMarketSale() expects a BigNumber instance (or a stringified number that it would convert to a BigNumber).
When you pass it undefined, it throws the error mentioned in your question.
Solution: Make sure that your nft.tokenId is either a BigNumber or string - not undefined.
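A minimal sketch of how you might guard against that, assuming the same nft object and ethers v5 setup as in the snippet above (the error message is illustrative):
// Hypothetical guard before calling the contract: make sure tokenId exists,
// then normalise it to a BigNumber so ethers receives what it expects.
if (nft.tokenId === undefined || nft.tokenId === null) {
  throw new Error("nft.tokenId is missing - check how the nft object is built");
}
const tokenId = ethers.BigNumber.from(nft.tokenId.toString());
const transaction = await contract.createMarketSale(tokenId, { value: price });
await transaction.wait();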
Hello, I am getting an error trying to deploy a function to Firebase, and it is bothering me because it worked in the past, and now that I want to deploy the same code it gives me the error shown below.
Can someone have a look? I checked the documentation thinking that something might have changed and the attribute names might no longer be the same, but the function seems 100% sound based on the documentation.
Kind regards and kudos to everyone.
Much respect if someone manages to give me a hint. I will add the log files also.
Code:
const functions = require("firebase-functions");
const axios = require("axios");
const admin = require("firebase-admin");
admin.initializeApp();
const database = admin.firestore();
const page = 1;
const fiat = "RON";
const tradeType = "BUY";
const asset = "USDT";
const payTypes = ["ING"];
let finalData = [];
let tempDataBeforeProccessing = [];
const baseObj = {
page,
rows: 20,
publisherType: null,
asset,
tradeType,
fiat,
payTypes,
};
const stringData = JSON.stringify(baseObj);
const getTheData = async function() {
tempDataBeforeProccessing=[];
await axios.post("https://p2p.binance.com/bapi/c2c/v2/friendly/c2c/adv/search", baseObj, {
hostname: "p2p.binance.com",
port: 443,
path: "/bapi/c2c/v2/friendly/c2c/adv/search",
method: "POST",
headers: {
"Content-Type": "application/json",
"Content-Length": stringData.length,
},
}).then((res)=>{
tempDataBeforeProccessing=res.data.data;
});
};
const processData = function() {
finalData=[];
let obj = [];
for (let i = 0; i < tempDataBeforeProccessing.length; i++) {
let payTypesz = "";
for (let y = 0; y <
tempDataBeforeProccessing[i]["adv"]["tradeMethods"].length; y++) {
payTypesz +=
tempDataBeforeProccessing[i]["adv"]["tradeMethods"][y]["identifier"];
if (y <
tempDataBeforeProccessing[i]["adv"]["tradeMethods"].length - 1) {
payTypesz += ", ";
}
}
obj = {
tradeType: tempDataBeforeProccessing[i]["adv"]["tradeType"],
asset: tempDataBeforeProccessing[i]["adv"]["asset"],
fiatUnit: tempDataBeforeProccessing[i]["adv"]["fiatUnit"],
price: tempDataBeforeProccessing[i]["adv"]["price"],
surplusAmount:
tempDataBeforeProccessing[i]["adv"]["surplusAmount"],
maxSingleTransAmount:
tempDataBeforeProccessing[i]["adv"]["maxSingleTransAmount"],
minSingleTransAmount:
tempDataBeforeProccessing[i]["adv"]["minSingleTransAmount"],
nickName:
tempDataBeforeProccessing[i]["advertiser"]["nickName"],
monthOrderCount:
tempDataBeforeProccessing[i]["advertiser"]["monthOrderCount"],
monthFinishRate:
tempDataBeforeProccessing[i]["advertiser"]["monthFinishRate"],
payTypes: payTypesz,
};
finalData.push(obj);
}
console.log(finalData);
};
const entireCall = async function() {
await getTheData();
processData();
};
exports.scheduledFunction = functions.pubsub
.schedule("every 1 minutes")
.onRun(async (context) => {
await database.collection("SebiBinanceSale").doc("BCR Bank").delete();
await entireCall();
for (let i = 0; i < finalData.length; i++) {
await database.collection("SebiBinanceSale").doc("BCR Bank")
.collection("1").doc(i.toString())
.set({
"tradeType": finalData[i]["tradeType"],
"asset": finalData[i]["asset"],
"fiatUnit": finalData[i]["fiatUnit"],
"price": finalData[i]["price"],
"surplusAmount": finalData[i]["surplusAmount"],
"maxSingleTransAmount": finalData[i]["maxSingleTransAmount"],
"minSingleTransAmount": finalData[i]["minSingleTransAmount"],
"nickName": finalData[i]["nickName"],
"monthOrderCount": finalData[i]["monthOrderCount"],
"monthFinishRate": finalData[i]["monthFinishRate"],
"payTypes": finalData[i]["payTypes"],
});
}
return console.log("Succes Upload of the data ");
});
error:
Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation.
Functions deploy had errors with the following functions:
scheduledFunction(us-central1)
i functions: cleaning up build files...
Error: There was an error deploying functions
ivanoiualexandrupaul@Ivanoius-MacBook-Pro functions %
log file:
[debug] [2022-10-29T17:40:16.776Z] Error: Failed to update function scheduledFunction in region us-central1
at /usr/local/lib/node_modules/firebase-tools/lib/deploy/functions/release/fabricator.js:41:11
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Fabricator.updateV1Function (/usr/local/lib/node_modules/firebase-tools/lib/deploy/functions/release/fabricator.js:305:32)
at async Fabricator.updateEndpoint (/usr/local/lib/node_modules/firebase-tools/lib/deploy/functions/release/fabricator.js:140:13)
at async handle (/usr/local/lib/node_modules/firebase-tools/lib/deploy/functions/release/fabricator.js:78:17)
[error]
[error] Error: There was an error deploying functions
When you are using scheduled functions in Cloud Functions for Firebase, an App Engine instance is created because Cloud Scheduler needs one to work. You can read about it here. It uses the location that has been set as the default for your GCP resources. I think you are getting this error because there is a difference between the default GCP resource location you specified and the region of your scheduled Cloud Function. Check your Cloud Scheduler function details and see which region it has been deployed to. By default, functions run in the us-central1 region. Check this link to see how to change the region of the function.
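For example, if your default GCP resource location is not us-central1, you could pin the scheduled function to a matching region explicitly. A sketch using the same firebase-functions v1 API as in the question (the region string is illustrative):
const functions = require("firebase-functions");

exports.scheduledFunction = functions
  .region("europe-west1") // hypothetical: pick the region that matches your default resource location
  .pubsub.schedule("every 1 minutes")
  .onRun(async (context) => {
    // ...same body as in the question
  });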
You can also try reinstalling the Firebase CLI using the command
npm install -g firebase-tools
Also check whether any lock files were generated, delete them, run firebase deploy --only functions again, and see if that works.
I'm a newbie to Firestore. The Firestore docs say...
Important: Unlike "push IDs" in the Firebase Realtime Database, Cloud Firestore auto-generated IDs do not provide any automatic ordering. If you want to be able to order your documents by creation date, you should store a timestamp as a field in the documents.
Reference: https://firebase.google.com/docs/firestore/manage-data/add-data
So do I have to create a key named timestamp in the document? Or is created sufficient to fulfill the above statement from the Firestore documentation?
{
"created": 1534183990,
"modified": 1534183990,
"timestamp":1534183990
}
firebase.firestore.FieldValue.serverTimestamp()
Whatever you want to call it is fine afaik. Then you can use orderByChild('created').
I also mostly use firebase.database.ServerValue.TIMESTAMP when setting time
ref.child(key).set({
id: itemId,
content: itemContent,
user: uid,
created: firebase.database.ServerValue.TIMESTAMP
})
Use the Firestore Timestamp class, firebase.firestore.Timestamp.now(), since firebase.firestore.FieldValue.serverTimestamp() does not work with the add method from Firestore. Reference
For Firestore
ref.doc(key).set({
created: firebase.firestore.FieldValue.serverTimestamp()
})
REALTIME SERVER TIMESTAMP USING FIRESTORE
import firebase from "firebase/app";
const someFunctionToUploadProduct = () => {
firebase.firestore().collection("products").add({
name: name,
price : price,
color : color,
weight :weight,
size : size,
createdAt : firebase.firestore.FieldValue.serverTimestamp()
})
.then(function(docRef) {
console.log("Document written with ID: ", docRef.id);
})
.catch(function(error) {
console.error("Error adding document: ", error);
});
}
All you need is to import 'firebase' and then call firebase.firestore.FieldValue.serverTimestamp() wherever you need it. Be careful with the spelling though; it's "serverTimestamp()". In this example it provides the timestamp value to 'createdAt' when uploading to the Firestore products collection.
That's correct; like most databases, Firestore doesn't store creation times. In order to sort objects by time:
Option 1: Create timestamp on client (correctness not guaranteed):
db.collection("messages").doc().set({
....
createdAt: firebase.firestore.Timestamp.now()
})
The big caveat here is that Timestamp.now() uses the local machine time. Therefore, if this is run on a client machine, you have no guarantee the timestamp is accurate. If you're setting this on the server, or if guaranteed order isn't so important, it might be fine.
Option 2: Use a timestamp sentinel:
db.collection("messages").doc().set({
....
createdAt: firebase.firestore.FieldValue.serverTimestamp()
})
A timestamp sentinel is a token that tells the firestore server to set the time server side on first write.
If you read the sentinel before it is written (e.g., in a listener) it will be NULL unless you read the document like this:
doc.data({ serverTimestamps: 'estimate' })
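For instance, reading it in a snapshot listener could look roughly like this (a sketch reusing the "messages"/createdAt names from above):
db.collection("messages").onSnapshot((snapshot) => {
  snapshot.docs.forEach((doc) => {
    // Without { serverTimestamps: 'estimate' }, createdAt stays null until the
    // server has actually written the timestamp.
    const data = doc.data({ serverTimestamps: "estimate" });
    console.log(doc.id, data.createdAt);
  });
});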
Set up your query with something like this:
// quick and dirty way, but uses local machine time
const midnight = new Date(firebase.firestore.Timestamp.now().toDate().setHours(0, 0, 0, 0));
const todaysMessages = firebase
.firestore()
.collection(`users/${user.id}/messages`)
.orderBy('createdAt', 'desc')
.where('createdAt', '>=', midnight);
Note that this query uses the local machine time (Timestamp.now()). If it's really important that your app uses the correct time on the clients, you could utilize this feature of Firebase's Realtime Database:
const serverTimeOffset = (await firebase.database().ref('/.info/serverTimeOffset').once('value')).val();
const midnightServerMilliseconds = new Date(serverTimeOffset + Date.now()).setHours(0, 0, 0, 0);
const midnightServer = new Date(midnightServerMilliseconds);
The documentation isn't suggesting the names of any of your fields. The part you're quoting is just saying two things:
The automatically generated document IDs for Firestore don't have a natural time-based ordering like they did in Realtime Database.
If you want time-based ordering, store a timestamp in the document, and use that to order your queries. (You can call it whatever you want.)
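For example, with the web compat API used elsewhere in this thread, ordering by such a field could look like this (the "posts" collection and "created" field names are illustrative):
firebase.firestore()
  .collection("posts")
  .orderBy("created", "desc") // newest first, using the timestamp field you stored
  .limit(20)
  .get()
  .then((snapshot) => {
    snapshot.docs.forEach((doc) => console.log(doc.id, doc.data()));
  });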
This solution worked for me:
Firestore.instance.collection("collectionName").add({'created': Timestamp.now()});
The result in Cloud Firestore is:
(screenshot of the result in the Cloud Firestore console)
Try this one for Swift 4: Timestamp(date: Date())
let docData: [String: Any] = [
"stringExample": "Hello world!",
"booleanExample": true,
"numberExample": 3.14159265,
"dateExample": Timestamp(Date()),
"arrayExample": [5, true, "hello"],
"nullExample": NSNull(),
"objectExample": [
"a": 5,
"b": [
"nested": "foo"
]
]
]
db.collection("data").document("one").setData(docData) { err in
if let err = err {
print("Error writing document: \(err)")
} else {
print("Document successfully written!")
}
}
The way it worked for me is just taking the timestamp from the snapshot parameter, snapshot.updateTime:
// Setup assumed by this snippet (not shown in the original answer):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const firestore = admin.firestore();
exports.newUserCreated = functions.firestore.document('users/{userId}').onCreate(async (snapshot, context) => {
console.log('started! v1.7');
const userID = context.params['userId'];
firestore.collection(`users/${userID}/lists`).add({
'created_time': snapshot.updateTime,
'name':'Products I ♥',
}).then(documentReference => {
console.log("initial public list created");
return null;
}).catch(error => {
console.error('Error creating initial list', error);
process.exit(1);
});
});
I am using Firestore to store data that comes from a Raspberry PI with Python. The pipeline is like this:
Raspberry PI (Python using paho-mqtt) -> Google Cloud IoT -> Google Cloud Pub/Sub -> Firebase Functions -> Firestore.
Data in the device is a Python Dictionary. I convert that to JSON.
The problem I had was that paho-mqtt will only send (publish) data as String and one of the fields of my data is timestamp. This timestamp is saved from the device because it accurately says when the measurement was taken regardless on when the data is ultimately stored in the database.
When I send my JSON structure, Firestore will store my field 'timestamp' as String. This is not convenient. So here is the solution.
I do the conversion in the Cloud Function that is triggered by the Pub/Sub topic, using the Moment library, before writing into Firestore.
Note: I am getting the timestamp in python with:
currenttime = datetime.datetime.utcnow()
var moment = require('moment'); // require Moment
function toTimestamp(strDate) {
  return moment(strDate, "YYYY-MM-DD HH:mm:ss:SS"); // parse the string with Moment
}
// Setup assumed by this snippet (not shown in the original answer):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();
exports.myFunctionPubSub = functions.pubsub.topic('my-topic-name').onPublish((message, context) => {
let parsedMessage = null;
try {
parsedMessage = message.json;
// Convert timestamp string to timestamp object
parsedMessage.date = toTimestamp(parsedMessage.date);
// Get the Device ID from the message. Useful when you have multiple IoT devices
const deviceID = parsedMessage._deviceID;
let addDoc = db.collection('MyDevices')
.doc(deviceID)
.collection('DeviceData')
.add(parsedMessage)
.then ( (ref) => {
console.log('Added document ID: ', ref.id);
return null;
}).catch ( (error) => {
console.error('Failed to write database', error);
return null;
});
} catch (e) {
console.error('PubSub message was not JSON', e);
}
// Expected return, or a warning will be triggered in the Firebase Function logs.
return null;
});
The Firestore method does not work. Use Timestamp from java.sql.Timestamp and don't cast it to a string. Then Firestore formats it properly. For example, to mark now() use:
val timestamp = Timestamp(System.currentTimeMillis())
There are multiple ways to store time in Firestore:
The firebaseAdmin.firestore.FieldValue.serverTimestamp() method. The actual timestamp will be computed when the doc is written to Firestore.
The firebaseAdmin.firestore.Timestamp.now() method, which uses the clock of the machine running the code.
For both methods, the next time you fetch the data it will return a Firestore Timestamp object.
So you first need to convert it to a native JS Date object, and then you can call methods on it like toISOString().
export function FStimestampToDate(
timestamp:
| FirebaseFirestore.Timestamp
| FirebaseFirestore.FieldValue
): Date {
return (timestamp as FirebaseFirestore.Timestamp).toDate();
}
Store it as a Unix timestamp with Date.now(); it'll be stored as a number, e.g. 1627235565028, but you won't be able to see it as a readable date in the Firestore console.
To query on this field, you need to convert your date to the same timestamp number first and then query (see the sketch after this list).
Store it as new Date().toISOString(), e.g. "2021-07-25T17:56:40.373Z", but you won't be able to perform a date range query on this.
I prefer the 2nd or 3rd way.
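A minimal sketch of such a query against the numeric Date.now() style field (db is assumed to be an initialised Firestore instance; "events" and "createdAt" are illustrative names):
// Convert the lower bound of the range to the same numeric (millisecond)
// representation that was written with Date.now(), then query on it.
const since = new Date("2021-07-01T00:00:00Z").getTime();
db.collection("events")
  .where("createdAt", ">=", since)
  .orderBy("createdAt", "asc")
  .get()
  .then((snapshot) => snapshot.docs.forEach((doc) => console.log(doc.id, doc.data().createdAt)));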
According to the docs, you can "set a field in your document to a server timestamp which tracks when the server receives the update".
Example:
import { updateDoc, serverTimestamp } from "firebase/firestore";
const docRef = doc(db, 'objects', 'some-id');
// Update the timestamp field with the value from the server
const updateTimestamp = await updateDoc(docRef, {
timestamp: serverTimestamp() // this does the trick!
});
Sharing what worked for me after googling for 2 hours, for firebase 9+
import { collection, addDoc, serverTimestamp } from "firebase/firestore";
import { db } from "./firebase"; // assumes db is your initialised Firestore instance; the path is illustrative
export const postData = ({ name, points }: any) => {
  const scoresRef = collection(db, "scores");
  return addDoc(scoresRef, {
    name,
    points,
    date: serverTimestamp(),
  });
};
Swift 5.1
...
"dateExample": Timestamp(date: Date()),
...
With the newest version of Firestore you should use it as follows:
import { doc, setDoc, Timestamp } from "firebase/firestore";
const docData = {
...
dateExample: Timestamp.fromDate(new Date("December 10, 1815"))
};
await setDoc(doc(db, "data", "one"), docData);
or for a server timestamp:
import { updateDoc, serverTimestamp } from "firebase/firestore";
const docRef = doc(db, 'objects', 'some-id');
const updateTimestamp = await updateDoc(docRef, {
timestamp: serverTimestamp()
});
I am playing around with downloading and serving MP3 files in Meteor.
I am trying to download an MP3 file (https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3) on my MeteorJS server side (to circumvent CORS issues) and then pass it back to the client to play it in an AUDIO tag.
In Meteor you use the Meteor.call function to call a server method. There is not much to configure, it's just a method call and a callback.
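For reference, a bare Meteor.call with a callback looks roughly like this (the method name matches the one used further down, and the URL is the sample file above):
Meteor.call('episode.download', 'https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3', (error, result) => {
  if (error) {
    console.error('download failed', error);
  } else {
    console.log('server returned', result);
  }
});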
When I run the method I receive this:
content:
"ID3���#K `�)�<H� e0�)������1������J}��e����2L����������fȹ\�CO��ȹ'�����}$A�Lݓ����3D/����fijw��+�LF�$?��`R�l�YA:A��#�0��pq����4�.W"�P���2.Iƭ5��_I�d7d����L��p0��0A��cA�xc��ٲR�BL8䝠4���T��..etc..", data:null,
headers: {
accept-ranges:"bytes",
connection:"close",
content-length:"443926",
content-type:"audio/mpeg",
date:"Mon, 20 Aug 2018 13:36:11 GMT",
last-modified:"Fri, 17 Jun 2016 18:16:53 GMT",
server:"Apache",
statusCode:200
which is the working Mp3 file (the content-length is exactly the same as the file I write to disk on the MeteorJS Server side, and it is playable).
However, the following code doesn't let me convert the response into a Blob:
MeteorObservable.call( 'episode.download', episode.url.url ).subscribe( ( result: any )=> {
console.log( 'response', result);
let URL = window.URL;
let blob = new Blob([ result.content ], {type: 'audio/mpeg'} );
console.log('blob', blob);
let audioUrl = URL.createObjectURL(blob);
let audioElement:any = document.getElementsByTagName('audio')[0];
audioElement.setAttribute("src", audioUrl);
audioElement.play();
})
When I run the code, the Blob has the wrong size and is not playable
Blob(769806) {size: 769806, type: "audio/mpeg"}
size:769806
type:"audio/mpeg"
__proto__:Blob
Uncaught (in promise) DOMException: Failed to load because no supported source was found.
On the backend I just run a return HTTP.get( url ); in the method which is using import { HTTP } from 'meteor/http'.
I have been trying to use btoa or atob but that doesn't work and as far as I know it is already a base64 encoded file, right?
I am not sure why the Blob constructor creates a larger file then the source returned from the backend. And I am not sure why it is not playing.
Can anyone point me to the right direction?
Finally found a solution that uses request instead of Meteor's HTTP:
First you need to install request and request-promise-native in order to make it easy to return your result to clients.
$ meteor npm install --save request request-promise-native
Now you just return the promise of the request in a Meteor method:
server/request.js
import { Meteor } from 'meteor/meteor'
import request from 'request-promise-native'
Meteor.methods({
getAudio (url) {
return request.get({url, encoding: null})
}
})
Notice the encoding: null flag, which causes the result to be binary. I found this in a comment on an answer related to downloading binary data via Node. It makes the response body come back as a binary representation of the data (a Node Buffer) instead of a string.
Now it gets interesting. On your client you won't receive a complex result anymore but either an Error or a Uint8Array, which makes sense because Meteor uses EJSON to send data over the wire with DDP, and the representation of binary data is a Uint8Array, as described in the documentation.
Because you can just pass in a Uint8Array into a Blob you can now easily create the blob like so:
const blob = new Blob([uint8Array], {type: 'audio/mpeg'})
Summarizing all this into a small template, it could look like this:
client/fetch.html
<template name="fetch">
<button id="fetchbutton">Fetch Mp3</button>
{{#if source}}
<audio id="player" src={{source}} preload="none" content="audio/mpeg" controls></audio>
{{/if}}
</template>
client/fetch.js
import { Meteor } from 'meteor/meteor'
import { Template } from 'meteor/templating'
import { ReactiveVar } from 'meteor/reactive-var'
import './fetch.html'
Template.fetch.onCreated(function helloOnCreated () {
// reactive source URL for the audio element
this.source = new ReactiveVar(null)
})
Template.fetch.helpers({
source () {
return Template.instance().source.get()
},
})
Template.fetch.events({
'click #fetchbutton' (event, instance) {
Meteor.call('getAudio', 'https://www.sample-videos.com/audio/mp3/crowd-cheering.mp3', (err, uint8Array) => {
const blob = new Blob([uint8Array], {type: 'audio/mpeg'})
instance.source.set(window.URL.createObjectURL(blob))
})
},
})
An alternative solution is adding a REST endpoint (using Express) to your Meteor backend.
Instead of HTTP we use request and request-progress to send the data chunked in case of large files.
On the frontend I catch the chunks using https://angular.io/guide/http#listening-to-progress-events to show a loader and deal with the response.
I could listen to the download via
this.http.get( 'the URL to a mp3', { responseType: 'arraybuffer'} ).subscribe( ( res:any ) => {
var blob = new Blob( [res], { type: 'audio/mpeg' });
var url= window.URL.createObjectURL(blob);
window.open(url);
} );
The above example doesn't show progress by the way, you need to implement the progress-events as explained in the angular article. Happy to update the example to my final code when finished.
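A rough sketch of what listening to progress events could look like, following the Angular guide linked above (the event handling details are illustrative; this.http is the injected HttpClient as in the snippet before):
import { HttpEventType } from '@angular/common/http';

this.http.get( 'the URL to a mp3', {
  responseType: 'arraybuffer',
  reportProgress: true,
  observe: 'events',
}).subscribe( ( event: any ) => {
  if (event.type === HttpEventType.DownloadProgress) {
    // update a loader with event.loaded / event.total here
    console.log('downloaded', event.loaded, 'of', event.total, 'bytes');
  } else if (event.type === HttpEventType.Response) {
    const blob = new Blob( [event.body], { type: 'audio/mpeg' });
    window.open(window.URL.createObjectURL(blob));
  }
});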
The Express setup on the Meteor Server:
/*
Source:http://www.mhurwi.com/meteor-with-express/
## api.class.ts
*/
import { WebApp } from 'meteor/webapp';
const express = require('express');
const trackRoute = express.Router();
const request = require('request');
const progress = require('request-progress');
export function api() {
const app = express();
app.use(function(req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
next();
});
app.use('/episodes', trackRoute);
trackRoute.get('/:url', (req, res) => {
res.set('content-type', 'audio/mp3');
res.set('accept-ranges', 'bytes');
// The options argument is optional so you can omit it
progress(request(req.params.url ), {
// throttle: 2000, // Throttle the progress event to 2000ms, defaults to 1000ms
// delay: 1000, // Only start to emit after 1000ms delay, defaults to 0ms
// lengthHeader: 'x-transfer-length' // Length header to use, defaults to content-length
})
.on('progress', function (state) {
// The state is an object that looks like this:
// {
// percent: 0.5, // Overall percent (between 0 to 1)
// speed: 554732, // The download speed in bytes/sec
// size: {
// total: 90044871, // The total payload size in bytes
// transferred: 27610959 // The transferred payload size in bytes
// },
// time: {
// elapsed: 36.235, // The total elapsed seconds since the start (3 decimals)
// remaining: 81.403 // The remaining seconds to finish (3 decimals)
// }
// }
console.log('progress', state);
})
.on('error', function (err) {
// Do something with err
})
.on('end', function () {
console.log('DONE');
// Do something after request finishes
})
.pipe(res);
});
WebApp.connectHandlers.use(app);
}
and then add this to your meteor startup:
import { Meteor } from 'meteor/meteor';
import { api } from './imports/lib/api.class';
Meteor.startup( () => {
api();
});
TL;DR;
Does anyone know if it's possible to use console.log in a Firebase/Google Cloud Function to log entries to Stack Driver using the jsonPayload property so my logs are searchable (currently anything I pass to console.log gets stringified into textPayload).
I have a multi-module project with some code running on Firebase Cloud Functions, and some running in other environments like Google Compute Engine. Simplifying things a little, I essentially have a 'core' module, and then I deploy the 'cloud-functions' module to Cloud Functions, 'backend-service' to GCE, which all depend on 'core' etc.
I'm using bunyan for logging throughout my 'core' module, and when deployed to GCE the logger is configured using '@google-cloud/logging-bunyan' so my logs go to Stack Driver.
Aside: Using this configuration in Google Cloud Functions is causing issues with Error: Endpoint read failed which I think is due to functions not going cold and trying to reuse dead connections, but I'm not 100% sure what the real cause is.
So now I'm trying to log using console.log(arg) where arg is an object, not a string. I want this object to appear in Stack Driver under the jsonPayload but it's being stringified and put into the textPayload field.
It took me a while, but I finally came across this example in the firebase functions samples repository. In the end I settled on something a bit like this:
const Logging = require('@google-cloud/logging');
const logging = new Logging();
const log = logging.log('my-func-logger');
const logMetadata = {
resource: {
type: 'cloud_function',
labels: {
function_name: process.env.FUNCTION_NAME ,
project: process.env.GCLOUD_PROJECT,
region: process.env.FUNCTION_REGION
},
},
};
const logData = { id: 1, score: 100 };
const entry = log.entry(logMetadata, logData);
log.write(entry)
You can add a string severity property value to logMetadata (e.g. "INFO" or "ERROR"). Here is the list of possible values.
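For example, extending the metadata object from the snippet above with a severity (the rest mirrors what is already there):
const logMetadata = {
  severity: 'ERROR', // one of the values from the linked list, e.g. DEBUG, INFO, WARNING, ERROR
  resource: {
    type: 'cloud_function',
    labels: {
      function_name: process.env.FUNCTION_NAME,
      project: process.env.GCLOUD_PROJECT,
      region: process.env.FUNCTION_REGION
    },
  },
};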
Update for available node 10 env vars. These seem to do the trick:
labels: {
function_name: process.env.FUNCTION_TARGET,
project: process.env.GCP_PROJECT,
region: JSON.parse(process.env.FIREBASE_CONFIG).locationId
}
UPDATE: Looks like for Node 10 runtimes they want you to set env values explicitly during deploy. I guess there has been a grace period in place because my deployed functions are still working.
I ran into the same problem, and as stated in the comments on @wtk's answer, I would like to add a snippet below that replicates all of the default Cloud Function logging behavior I could find, including execution_id.
At least for Cloud Functions with the HTTP trigger option, the following produced correct logs for me. I have not tested it with Firebase Cloud Functions.
// global
const { Logging } = require("@google-cloud/logging");
const logging = new Logging();
const Log = logging.log("cloudfunctions.googleapis.com%2Fcloud-functions");
const LogMetadata = {
severity: "INFO",
type: "cloud_function",
labels: {
function_name: process.env.FUNCTION_NAME,
project: process.env.GCLOUD_PROJECT,
region: process.env.FUNCTION_REGION
}
};
// per request
const data = { foo: "bar" };
const traceId = req.get("x-cloud-trace-context").split("/")[0];
const metadata = {
...LogMetadata,
severity: 'INFO',
trace: `projects/${process.env.GCLOUD_PROJECT}/traces/${traceId}`,
labels: {
execution_id: req.get("function-execution-id")
}
};
Log.write(Log.entry(metadata, data));
The GitHub link in @wtk's answer should be updated to:
https://github.com/firebase/functions-samples/blob/2f678fb933e416fed9be93e290ae79f5ea463a2b/stripe/functions/index.js#L103
As it refers to the repository as of when the question was answered, and has the following function in it:
// To keep on top of errors, we should raise a verbose error report with Stackdriver rather
// than simply relying on console.error. This will calculate users affected + send you email
// alerts, if you've opted into receiving them.
// [START reporterror]
function reportError(err, context = {}) {
// This is the name of the StackDriver log stream that will receive the log
// entry. This name can be any valid log stream name, but must contain "err"
// in order for the error to be picked up by StackDriver Error Reporting.
const logName = 'errors';
const log = logging.log(logName);
// https://cloud.google.com/logging/docs/api/ref_v2beta1/rest/v2beta1/MonitoredResource
const metadata = {
resource: {
type: 'cloud_function',
labels: {function_name: process.env.FUNCTION_NAME},
},
};
// https://cloud.google.com/error-reporting/reference/rest/v1beta1/ErrorEvent
const errorEvent = {
message: err.stack,
serviceContext: {
service: process.env.FUNCTION_NAME,
resourceType: 'cloud_function',
},
context: context,
};
// Write the error log entry
return new Promise((resolve, reject) => {
log.write(log.entry(metadata, errorEvent), (error) => {
if (error) {
return reject(error);
}
resolve();
});
});
}
// [END reporterror]
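For completeness, a typical way to use this helper inside a function handler could look like the sketch below (the surrounding HTTPS function and the context object are illustrative; it assumes the logging client is set up as in the snippet above):
exports.myFunction = functions.https.onRequest(async (req, res) => {
  try {
    // ...do the real work here...
    res.status(200).send('ok');
  } catch (err) {
    // Report to Stackdriver Error Reporting via the helper above, then respond.
    await reportError(err, { user: 'some-user-id' });
    res.status(500).send('something went wrong');
  }
});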