Here's the starter code provided with a new GCP Cloud Function:
/**
* Responds to any HTTP request that can provide a "message" field in the body.
*
* @param {!Object} req Cloud Function request context.
* @param {!Object} res Cloud Function response context.
*/
exports.helloWorld = function helloWorld(req, res) {
// Example input: {"message": "Hello!"}
if (req.body.message === undefined) {
// This is an error case, as "message" is required.
res.status(400).send('No message defined!');
} else {
// Everything is okay.
console.log(req.body.message);
res.status(200).send('Success: ' + req.body.message);
}
};
... and the package.json:
{
"name": "sample-http",
"version": "0.0.1"
}
Looking for a basic example of calling Datastore from here.
I'm not a Node.js user, but based on the documentation I think one convenient way would be to use the Node.js Cloud Datastore Client Library. The example from that page:
// Imports the Google Cloud client library
const Datastore = require('@google-cloud/datastore');
// Your Google Cloud Platform project ID
const projectId = 'YOUR_PROJECT_ID';
// Instantiates a client
const datastore = Datastore({
projectId: projectId
});
// The kind for the new entity
const kind = 'Task';
// The name/ID for the new entity
const name = 'sampletask1';
// The Cloud Datastore key for the new entity
const taskKey = datastore.key([kind, name]);
// Prepares the new entity
const task = {
key: taskKey,
data: {
description: 'Buy milk'
}
};
// Saves the entity
datastore.save(task)
.then(() => {
console.log(`Saved ${task.key.name}: ${task.data.description}`);
})
.catch((err) => {
console.error('ERROR:', err);
});
But you may also want to take a look at Client Libraries Explained, since it describes (or links to detailed pages about) other access options, some of which you might find preferable.
You also need to include the Datastore dependency in package.json:
{
"name": "sample-http",
"dependencies": {
"#google-cloud/datastore": "1.3.4"
},
"version": "0.0.1"
}
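Putting the two together, here's a minimal sketch (untested; it reuses the starter function's "message" field and the 'Task' kind from the example above, and lets Datastore auto-generate the entity ID):
const Datastore = require('@google-cloud/datastore');
const datastore = Datastore();

exports.helloWorld = function helloWorld(req, res) {
  if (req.body.message === undefined) {
    res.status(400).send('No message defined!');
    return;
  }
  // An incomplete key: Datastore assigns a numeric ID on save
  const taskKey = datastore.key(['Task']);
  datastore.save({ key: taskKey, data: { description: req.body.message } })
    .then(() => res.status(200).send('Saved: ' + req.body.message))
    .catch((err) => {
      console.error('ERROR:', err);
      res.status(500).send('Unable to save entity');
    });
};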
Related
Goal: to monitor Lambda throttles with CloudWatch Alarms. The Lambdas are built with the CLI using amplify add function. The code below was implemented following the Amplify documentation, running amplify add custom and then using the CDK.
Code:
/* AWS CDK code goes here - learn more: https://docs.aws.amazon.com/cdk/latest/guide/home.html */
import * as cdk from "@aws-cdk/core";
import * as AmplifyHelpers from "@aws-amplify/cli-extensibility-helper";
import { AmplifyDependentResourcesAttributes } from "../../types/amplify-dependent-resources-ref";
import { Alarm, ComparisonOperator, AlarmProps } from "@aws-cdk/aws-cloudwatch";
export class cdkStack extends cdk.Stack {
private readonly ApplicationToExpiredApplicationTableAlarm: Alarm;
constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps, amplifyResourceProps?: AmplifyHelpers.AmplifyResourceProps) {
super(scope, id, props);
// 🔽 Do not remove - Amplify CLI automatically injects the current deployment environment in this input parameter
new cdk.CfnParameter(this, "env", { type: "String", description: "Current Amplify CLI env name" });
// 🔽 Obtaining the project information
const amplifyProjectInfo = AmplifyHelpers.getProjectInfo();
const environment = cdk.Fn.ref('env');
// 🔽 Obtaining the Lambda Functions from Amplify
const dependencies: AmplifyDependentResourcesAttributes = AmplifyHelpers.addResourceDependency(this, amplifyResourceProps.category, amplifyResourceProps.resourceName, [
{ category: "function", resourceName: "ApplicationToExpiredApplicationTable" },
]);
// 🔽 Attempting to access the Lambda Function directly
const ApplicationToExpiredApplicationTable = dependencies.function["ApplicationToExpiredApplicationTable"];
// 🔽 Defining Alarm Props
const lambdaThrottles = (lambdaFunction: any): AlarmProps => {
return {
comparisonOperator: ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD,
threshold: 1,
evaluationPeriods: 1,
metric: lambdaFunction.metricThrottles({ label : lambdaFunction.functionName}),
actionsEnabled: true,
alarmDescription: `This lambda has throttled ${lambdaFunction.functionName}`,
alarmName: `${lambdaFunction.functionName}-Throttles`
}
};
// 🔽 Building the Alarms
this.ApplicationToExpiredApplicationTableAlarm = new Alarm(this, `${environment}-ApplicationToExpiredApplicationTableAlarm-${amplifyProjectInfo.projectName}`, lambdaThrottles(ApplicationToExpiredApplicationTable))
}
}
Current Error:
I cannot access the function the way I could with a native CDK Function construct, and thus cannot call lambdaFunction.metricThrottles because it is not available.
I understand how to get the ARN or the name by following the Amplify documentation, but that still doesn't get me access to the function itself, just to properties of it.
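One avenue worth exploring (an untested sketch, not a confirmed fix): since Amplify only hands back attributes such as the ARN, you could import the function into the stack from that ARN; the resulting IFunction does expose metricThrottles(), so the lambdaThrottles helper above would work again. The construct ID "ImportedFn" is arbitrary, and the import statement belongs at the top of the file:
import * as lambda from "@aws-cdk/aws-lambda";

// Resolve the ARN attribute Amplify exposes for the dependency
const fnArn = cdk.Fn.ref(dependencies.function.ApplicationToExpiredApplicationTable.Arn);
// Import the existing Lambda as an IFunction, which supports metricThrottles()
const importedFn = lambda.Function.fromFunctionArn(this, "ImportedFn", fnArn);
this.ApplicationToExpiredApplicationTableAlarm = new Alarm(
  this,
  `${environment}-ApplicationToExpiredApplicationTableAlarm-${amplifyProjectInfo.projectName}`,
  lambdaThrottles(importedFn)
);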
I'm a little bit lost on how Firebase functions work with authentication.
Suppose I have a function that pulls 100 documents and sets the cache header for 24 hours.
res.set('Cache-Control', 'public, max-age=0, s-maxage=86400'); // 24 * 60 * 60
By default, does that apply to all users, or is it cached per user? There are some instances where the 100 documents are unique to the user, while other functions return 100 documents available to any authenticated user.
I see in the docs that you can set a __session cookie, which implies it's for individual users' data; however, there isn't much documentation on how (or where) to set it. Is it set by default?
My goal is to have a function that requires the user to be authenticated, then returns 100 documents from a non-user-specific collection, i.e. without having to read 100 documents per user. However, I don't think that's feasible because it would need to check whether each user is authorized (not cacheable). So is there a way to just make a publicly available cache?
Any light that can be shared on this is greatly appreciated!
The Cache-Control header is used to instruct a user's browser and any CDN edge server on how to cache the request.
For requests requiring authentication, making use of the CDN is not really possible, as you should be using Cache-Control: private for these responses (the default for Cloud Functions).
While you could check that your users are authenticated and then redirect them to a publicly cached resource (like https://example.com/api/docs?sig=<somesignature>), that URL would still be accessible if someone got hold of the URL/cached data.
Arguably the best approach would be to store your "cached" responses in a single Cloud Firestore document (if it is less than 1MB in size and is JSON-compatible) or store it in Cloud Storage.
The code included below is an example of how you could do this with a Cloud Firestore cache. I've used posts where the authenticated user is the author as an example, but for this specific use case, you would be better off using the Firebase SDK to make such a query (realtime updates, finer control, query API). A similar approach could be applied for "all user" resources.
If attempting to cache HTML or some other non-JSON-friendly format, I would recommend changing the caching layer to Cloud Storage. Instead of storing the post's data in the cache entry, store the path and bucket of the cached file in Storage (like below). Then, if it hasn't expired, get a stream of that file from Storage and pipe it through to the client.
{
data: {
fullPath: `/_serverCache/apiCache/${uid}/posts.html`,
bucket: "myBucket"
},
/* ... */
}
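As a rough sketch of that Cloud Storage variant (assuming a cacheEntry shaped like the JSON above, and that the expiry check from getCachedPostsForAuthor below has already passed):
// Stream the cached file from Cloud Storage straight to the client
const { fullPath, bucket } = cacheEntry.data;
admin.storage()
  .bucket(bucket)
  .file(fullPath)
  .createReadStream()
  .on("error", () => res.status(500).send("Cache read failed"))
  .pipe(res);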
Common Example Code
import functions from "firebase-functions";
import { HttpsError } from "firebase-functions/lib/providers/https";
import admin from "firebase-admin";
import hash from "object-hash";
admin.initializeApp();
interface AttachmentData {
/** May contain a URL to the resource */
url?: string;
/** May contain Base64 encoded data of resource */
data?: string;
/** Type of this resource */
type: "image" | "video" | "social" | "web";
}
interface PostData {
author: string;
title: string;
content: string;
attachments: Record<string, AttachmentData>;
postId: string;
}
interface CacheEntry<T = admin.firestore.DocumentData> {
/** Time data was cached, as a Cloud Firestore Timestamp object */
cachedAt: admin.firestore.Timestamp;
/** Time the cached data expires, as a Cloud Firestore Timestamp object */
expiresAt: admin.firestore.Timestamp;
/** The ETag signature of the cached resource */
eTag: string;
/** The cached resource */
data: T;
}
/**
* Returns posts authored by this user as an array, from Firestore
*/
async function getLivePostsForAuthor(uid: string) {
// fetch the data
const posts = await admin.firestore()
.collection('posts')
.where('author', '==', uid)
.limit(100)
.get();
// flatten the results into an array, including the post's document ID in the data
const results: PostData[] = [];
posts.forEach((postDoc) => {
results.push({ postId: postDoc.id, ...postDoc.data() } as PostData);
});
return results;
}
/**
* Returns posts authored by this user as an array, caching the result from Firestore
*/
async function getCachedPostsForAuthor(uid: string) {
// Get the reference to the data's location
const cachedPostsRef = admin.firestore()
.doc(`_server/apiCache/${uid}/posts`) as admin.firestore.DocumentReference<CacheEntry<PostData[]>>;
// Get the cache entry's data
const cachedPostsSnapshot = await cachedPostsRef.get();
if (cachedPostsSnapshot.exists) {
// get the expiresAt property on its own
// this allows us to skip processing the entire document until needed
const expiresAt = cachedPostsSnapshot.get("expiresAt") as CacheEntry["expiresAt"] | undefined;
if (expiresAt !== undefined && expiresAt.toMillis() > Date.now() - 60000) {
// return the entire cache entry as-is
return cachedPostsSnapshot.data()!;
}
}
// if here, the cache entry doesn't exist or has expired
// get the live results from Firestore
const results = await getLivePostsForAuthor(uid);
// etag, cachedAt and expiresAt are used for the HTTP cache-related headers
// only expiresAt is used when determining expiry
const cacheEntry: CacheEntry<PostData[]> = {
data: results,
eTag: hash(results),
cachedAt: admin.firestore.Timestamp.now(),
// set expiry as 1 day from now
expiresAt: admin.firestore.Timestamp.fromMillis(Date.now() + 86400000),
};
// save the cached data and its metadata for future calls
await cachedPostsRef.set(cacheEntry);
// return the cached data
return cacheEntry;
}
HTTPS Request Function
This is the request type you would use for serving Cloud Functions behind Firebase Hosting. Unfortunately, the implementation details aren't as straightforward as with a Callable Function (see below), but the authentication middleware is provided as an official project sample. You will need to insert validateFirebaseIdToken() from that example for this code to work.
import express from "express";
import cookieParserLib from "cookie-parser";
import corsLib from "cors";
interface AuthenticatedRequest extends express.Request {
user: admin.auth.DecodedIdToken
}
const cookieParser = cookieParserLib();
const cors = corsLib({origin: true});
const app = express();
// insert from https://github.com/firebase/functions-samples/blob/2531d6d1bd6b16927acbe3ec54d40369ce7488a6/authorized-https-endpoint/functions/index.js#L26-L69
const validateFirebaseIdToken = /* ... */
app.use(cors);
app.use(cookieParser);
app.use(validateFirebaseIdToken);
app.get('/', async (req, res) => {
// if here, user has already been validated, decoded and attached as req.user
const user = (req as AuthenticatedRequest).user;
try {
const cacheEntry = await getCachedPostsForAuthor(user.uid);
// set caching headers
res
.header("Cache-Control", "private")
.header("ETag", cacheEntry.eTag)
.header("Expires", cacheEntry.expiresAt.toDate().toUTCString());
if (req.header("If-None-Match") === cacheEntry.eTag) {
// cached data is the same, just return empty 304 response
res.status(304).send();
} else {
// send the data back to the client as JSON
res.json(cacheEntry.data);
}
} catch (err) {
if (err instanceof HttpsError) {
throw err;
} else {
throw new HttpsError("unknown", err && err.message, err);
}
}
});
export const getMyPosts = functions.https.onRequest(app);
Callable HTTPS Function
If you are making use of the client SDKs, you can also request the cached data using Callable Functions.
This allows you to export the function like this:
export const getMyPosts = functions.https.onCall(async (data, context) => {
if (!context.auth) {
throw new functions.https.HttpsError(
'failed-precondition',
'The function must be called while authenticated.'
);
}
try {
const cacheEntry = await getCachedPostsForAuthor(context.auth.uid);
return cacheEntry.data;
} catch (err) {
if (err instanceof HttpsError) {
throw err;
} else {
throw new HttpsError("unknown", err && err.message, err);
}
}
});
and call it from the client using:
const getMyPosts = firebase.functions().httpsCallable('getMyPosts');
getMyPosts()
.then((postsArray) => {
// do something
})
.catch((error) => {
// handle errors
})
TL;DR:
Does anyone know if it's possible to use console.log in a Firebase/Google Cloud Function to log entries to Stackdriver using the jsonPayload property, so my logs are searchable? (Currently anything I pass to console.log gets stringified into textPayload.)
I have a multi-module project with some code running on Firebase Cloud Functions and some running in other environments like Google Compute Engine. Simplifying things a little, I essentially have a 'core' module, and then I deploy the 'cloud-functions' module to Cloud Functions and 'backend-service' to GCE, all of which depend on 'core', etc.
I'm using bunyan for logging throughout my 'core' module, and when deployed to GCE the logger is configured using '@google-cloud/logging-bunyan' so my logs go to Stackdriver.
Aside: Using this configuration in Google Cloud Functions is causing issues with Error: Endpoint read failed, which I think is due to functions not going cold and trying to reuse dead connections, but I'm not 100% sure what the real cause is.
So now I'm trying to log using console.log(arg) where arg is an object, not a string. I want this object to appear in Stackdriver under jsonPayload, but it's being stringified and put into the textPayload field.
It took me a while, but I finally came across this example in the firebase functions samples repository. In the end, I settled on something a bit like this:
const Logging = require('@google-cloud/logging');
const logging = new Logging();
const log = logging.log('my-func-logger');
const logMetadata = {
resource: {
type: 'cloud_function',
labels: {
function_name: process.env.FUNCTION_NAME,
project: process.env.GCLOUD_PROJECT,
region: process.env.FUNCTION_REGION
},
},
};
const logData = { id: 1, score: 100 };
const entry = log.entry(logMetadata, logData);
log.write(entry);
You can add a string severity property value to logMetadata (e.g. "INFO" or "ERROR"). Here is the list of possible values.
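For example, a quick sketch (the severity values are the ones from the linked list):
// Tag an entry as an error so Stackdriver classifies it accordingly
const errorMetadata = Object.assign({}, logMetadata, { severity: 'ERROR' });
log.write(log.entry(errorMetadata, { message: 'something failed', id: 1 }));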
Update for available node 10 env vars. These seem to do the trick:
labels: {
function_name: process.env.FUNCTION_TARGET,
project: process.env.GCP_PROJECT,
region: JSON.parse(process.env.FIREBASE_CONFIG).locationId
}
UPDATE: Looks like for Node 10 runtimes they want you to set env values explicitly during deploy. I guess there has been a grace period in place because my deployed functions are still working.
I ran into the same problem, and as stated in the comments on @wtk's answer, I would like to add the snippet below, which replicates all of the default Cloud Function logging behavior I could find, including execution_id.
At least when using Cloud Functions with the HTTP trigger option, the following produced correct logs for me. I have not tested it with Firebase Cloud Functions.
// global
const { Logging } = require("@google-cloud/logging");
const logging = new Logging();
const Log = logging.log("cloudfunctions.googleapis.com%2Fcloud-functions");
const LogMetadata = {
severity: "INFO",
type: "cloud_function",
labels: {
function_name: process.env.FUNCTION_NAME,
project: process.env.GCLOUD_PROJECT,
region: process.env.FUNCTION_REGION
}
};
// per request
const data = { foo: "bar" };
const traceId = req.get("x-cloud-trace-context").split("/")[0];
const metadata = {
...LogMetadata,
severity: 'INFO',
trace: `projects/${process.env.GCLOUD_PROJECT}/traces/${traceId}`,
labels: {
execution_id: req.get("function-execution-id")
}
};
Log.write(Log.entry(metadata, data));
The GitHub link in @wtk's answer should be updated to:
https://github.com/firebase/functions-samples/blob/2f678fb933e416fed9be93e290ae79f5ea463a2b/stripe/functions/index.js#L103
As it refers to the repository as of when the question was answered, and has the following function in it:
// To keep on top of errors, we should raise a verbose error report with Stackdriver rather
// than simply relying on console.error. This will calculate users affected + send you email
// alerts, if you've opted into receiving them.
// [START reporterror]
function reportError(err, context = {}) {
// This is the name of the StackDriver log stream that will receive the log
// entry. This name can be any valid log stream name, but must contain "err"
// in order for the error to be picked up by StackDriver Error Reporting.
const logName = 'errors';
const log = logging.log(logName);
// https://cloud.google.com/logging/docs/api/ref_v2beta1/rest/v2beta1/MonitoredResource
const metadata = {
resource: {
type: 'cloud_function',
labels: {function_name: process.env.FUNCTION_NAME},
},
};
// https://cloud.google.com/error-reporting/reference/rest/v1beta1/ErrorEvent
const errorEvent = {
message: err.stack,
serviceContext: {
service: process.env.FUNCTION_NAME,
resourceType: 'cloud_function',
},
context: context,
};
// Write the error log entry
return new Promise((resolve, reject) => {
log.write(log.entry(metadata, errorEvent), (error) => {
if (error) {
return reject(error);
}
resolve();
});
});
}
// [END reporterror]
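For completeness, here is a hypothetical usage of reportError from inside an HTTP function's catch handler (somePromise, req, and res are placeholders; the sample repo calls it the same way from its own catch blocks):
// Hypothetical usage: report the failure, then respond to the caller
somePromise
  .then((result) => res.send(result))
  .catch((error) => {
    reportError(error, { httpRequest: { url: req.url, method: req.method } });
    res.status(500).send('Something went wrong');
  });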
I've launched the local Datastore emulator, although I wrote and tested my GCF against the remote (non-emulated) Datastore instance. Now I'm trying to use the locally launched Datastore instance for testing purposes, but all requests are still going to the cloud instance of Datastore.
Here is the code.
const db = require("@google-cloud/datastore")();
exports.signUp = (req, res) => {
if(!req.body.firstName || !req.body.lastName || !req.body.email) {
res.status(400).send("Incorrect user data passed");
} else {
let key = db.key("User");
console.log("KEY: ", key);
db.insert({
key: key,
data: {
firstName: req.body.firstName,
lastName: req.body.lastName,
email: req.body.email
}
}, (err, apiResponse) => {
console.log(apiResponse);
if(err) {
res.status(400).json({
message: "Error occured during creation"
});
} else {
res.status(200).json({
message: `Created under ${apiResponse}`
});
}
});
}
};
I know about the apiEndpoint parameter (linked in the documentation) in the Datastore instance configuration object. But should it really be passed explicitly in the code? I thought there should be some environment variable that tells the default configuration to look for the Datastore emulator first, and then fall back to the cloud one.
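For reference, passing apiEndpoint explicitly would look something like this (a sketch; "my-project" is a placeholder, and localhost:8081 is just an assumed host:port — use whatever the emulator prints when it starts):
const db = require("@google-cloud/datastore")({
  projectId: "my-project",
  apiEndpoint: "localhost:8081" // host:port reported by the Datastore emulator
});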
I've found a lot of tutorials that have gotten me as far as logging in with OAuth in Meteor / MeteorAngular. Unfortunately, they've never gotten me as far as successfully accessing the Microsoft Graph API. I know I need to (somehow) transform my user access token into a bearer token, but I haven't found any guides on how to do that in Meteor, and the sample Node apps I've found that do similar things don't run out of the box, either. I have some routes that should return the data I need; I just can't quite seem to hit them.
Meteor + Microsoft... is darned near nonexistent, as far as I can tell
lol, you aren't wrong. However, as one of the primary engineers behind Sidekick AI (a scheduling app built with Meteor that integrates with Microsoft and Google calendars), I have some very practical examples of using said access_token to work with the Microsoft Graph API from Meteor.
Here are some examples of functions I've written that use the access_token to communicate with the Graph API to work with calendars. Notably, the line that matters is where I set:
Authorization: `Bearer ${sources.accessToken}` // `sources` is just a document from the MongoDB collection we store that info in
NOTE: if you need more, I'd be happy to provide it, but I didn't want to clutter this initially.
NOTE: wherever you see "Outlook" it really should say "Microsoft"; that's just a symptom of uninformed early development.
/**
* This function will use the Meteor HTTP package to make a POST request
* to the Microsoft Graph Calendar events endpoint using an access_token to create a new
* Event for the connected Outlook account
*
* @param { String } sourceId The Sources._id we are using tokens from for the API request
*
* @param { Object } queryData Contains key value pairs which will describe the new Calendar
* Event
*
* @returns { Object } The newly created Event Object
*/
export const createOutlookCalendarEvent = async (
sourceId: string,
queryData: {
subject: any
start: {
dateTime: any
timeZone: string
}
end: {
dateTime: any
timeZone: string
}
isOnlineMeeting: any
body: {
content: string
contentType: string
},
attendees: {
emailAddress: {
address: string
name: string
}
}[]
}
): Promise<object> => {
await attemptTokenRefresh(sourceId)
const sources = Sources.findOne({ _id: sourceId })
const options = {
headers: {
Accept: `application/json`,
Authorization: `Bearer ${sources.accessToken}`,
},
data: queryData,
}
return new Promise((resolve, reject) => {
HTTP.post(`${OauthEndpoints.Outlook.events}`, options, (error, response) => {
if (error) {
reject(handleError(`Failed to create a new Outlook Calendar event`, error, { options, sourceId, queryData, sources }))
} else {
resolve(response.data)
}
})
})
}
/**
* This function will use the Meteor HTTP package to make a GET request
* to the Microsoft Graph Calendars endpoint using an access_token obtained from the
* Microsoft common oauth2 v2.0 token endpoint. It retrieves Objects describing the calendars
* under the connected account
*
* @param sourceId The Sources._id we are using tokens from for the API request
*
* @returns Contains Objects describing the user's connected account's calendars
*/
export const getCalendars = async (sourceId: string): Promise<Array<any>> => {
await attemptTokenRefresh(sourceId)
const sources = Sources.findOne({ _id: sourceId })
const options = {
headers: {
Accept: `application/json`,
Authorization: `Bearer ${sources.accessToken}`,
},
}
return new Promise((resolve, reject) => {
HTTP.get(OauthEndpoints.Outlook.calendars, options, (error, response) => {
if (error) {
reject(handleError(`Failed to retrieve the calendars for a user's connected Outlook account`, error, { options, sources, sourceId }))
} else {
resolve(response.data.value)
}
})
})
}
// reference for the aforementioned `OauthEndpoints`
/**
* Object holding endpoints for our Oauth integrations with Google,
* Microsoft, and Zoom
*/
export const OauthEndpoints = {
...
Outlook: {
auth: 'https://login.microsoftonline.com/common/oauth2/v2.0/authorize?',
token: 'https://login.microsoftonline.com/common/oauth2/v2.0/token',
calendars: 'https://graph.microsoft.com/v1.0/me/calendars',
calendarView: 'https://graph.microsoft.com/v1.0/me/calendar/calendarView',
events: 'https://graph.microsoft.com/v1.0/me/calendar/events',
messages: 'https://graph.microsoft.com/v1.0/me/messages',
sendMail: 'https://graph.microsoft.com/v1.0/me/sendMail',
},
...
}
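To round this out, a hypothetical usage of getCalendars from a Meteor method (the method name and the fields picked off each calendar object are assumptions on my part):
import { Meteor } from "meteor/meteor";

Meteor.methods({
  async "outlook.listCalendars"(sourceId: string) {
    // getCalendars handles the token refresh and the Graph API call shown above
    const calendars = await getCalendars(sourceId);
    // Graph calendar objects carry (at least) an id and a name
    return calendars.map((c) => ({ id: c.id, name: c.name }));
  },
});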