Does the Firebase Admin Node.js SDK support exponential backoff?

I'm looking to set up a simple Node.js server which will listen for requests from an app I'm building and, from those, build a "send message" request to Firebase Cloud Messaging (as described here: https://firebase.google.com/docs/cloud-messaging/send-message) so that a push notification is sent out to the relevant client-side apps. Doing this requires the Firebase Admin SDK.
Google's Firebase Cloud Messaging documentation, under "Server Environments" (https://firebase.google.com/docs/cloud-messaging/server), states that running the Firebase Admin SDK requires a server environment that can handle requests and resend them using exponential backoff. I take this to mean that the "send message" request to Firebase Cloud Messaging must be retried using an exponential backoff algorithm.
The documentation doesn't say, however, whether the Node.js Firebase Admin SDK already provides this exponential backoff functionality, or whether I have to implement it myself when using the Admin SDK.
According to an earlier Stack Overflow post (Firebase Admin SDK Java Backoff/retry), the Java SDK does support exponential backoff, but I'm not sure whether the Node.js one does.

firebaser here
The default retry configuration in the Firebase Admin Node.js SDK automatically retries up to 4 times on connection reset, timeout, and HTTP 503 errors. If the error response contains a Retry-After header, the default retry config respects that too.
In the default config, backOffFactor is set to 0.5, which results in the exponential retry delay pattern of 0s, 1s, 2s, and 4s.
/**
 * Default retry configuration for HTTP requests. Retries up to 4 times on connection reset and timeout errors
 * as well as HTTP 503 errors. Exposed as a function to ensure that every HttpClient gets its own RetryConfig
 * instance.
 */
export function defaultRetryConfig(): RetryConfig {
  return {
    maxRetries: 4,
    statusCodes: [503],
    ioErrorCodes: ['ECONNRESET', 'ETIMEDOUT'],
    backOffFactor: 0.5,
    maxDelayInMillis: 60 * 1000,
  };
}
Reference to code in documentation
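To make the delay pattern concrete, here is a small sketch of how such a delay can be computed from a backOffFactor of 0.5 (my own illustration of the documented pattern, not the SDK's internal code):

// Delay before retry attempt n (0-indexed), for backOffFactor = 0.5.
// The first retry fires immediately; each later retry doubles the wait,
// capped at maxDelayInMillis: 0s, 1s, 2s, 4s for the four default retries.
function backOffDelayMillis(retryAttempt: number, backOffFactor = 0.5, maxDelayInMillis = 60 * 1000): number {
  if (retryAttempt === 0) {
    return 0;
  }
  const delayMillis = Math.pow(2, retryAttempt) * backOffFactor * 1000;
  return Math.min(delayMillis, maxDelayInMillis);
}

// [0, 1, 2, 3].map(n => backOffDelayMillis(n)) -> [0, 1000, 2000, 4000]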

Related

SocketTimeoutException when calling load for DynamoDBMapper

I sometimes get this error when calling load for DynamoDBMapper:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1593)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at com.amazonaws.internal.EC2ResourceFetcher.doReadResource(EC2ResourceFetcher.java:82)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.getToken(InstanceMetadataServiceResourceFetcher.java:91)
at com.amazonaws.internal.InstanceMetadataServiceResourceFetcher.readResource(InstanceMetadataServiceResourceFetcher.java:69)
at com.amazonaws.internal.EC2ResourceFetcher.readResource(EC2ResourceFetcher.java:66)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsEndpoint(InstanceMetadataServiceCredentialsFetcher.java:58)
at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.getCredentialsResponse(InstanceMetadataServiceCredentialsFetcher.java:46)
at com.amazonaws.auth.BaseCredentialsFetcher.fetchCredentials(BaseCredentialsFetcher.java:112)
at com.amazonaws.auth.BaseCredentialsFetcher.getCredentials(BaseCredentialsFetcher.java:68)
at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:166)
at com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper.getCredentials(EC2ContainerCredentialsProviderWrapper.java:75)
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1251)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:827)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:777)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:764)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:698)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:680)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:544)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:524)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:5110)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:5077)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeGetItem(AmazonDynamoDBClient.java:2197)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.getItem(AmazonDynamoDBClient.java:2163)
at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper.load(DynamoDBMapper.java:431)
at com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper.load(DynamoDBMapper.java:448)
at com.amazonaws.services.dynamodbv2.datamodeling.AbstractDynamoDBMapper.load(AbstractDynamoDBMapper.java:80)
I get two timeouts to PUT /latest/api/token, then I get a response. I am not sure what exactly is wrong or why I see this behavior sometimes, but it leads to latency in my application.
Do I need to modify something in the settings? Is it related to DynamoDBMapper? Should I use the low-level DynamoDB API?
These issues can occur when:
You call a remote API that takes too long to respond or that is unreachable.
Your API call doesn't get a response within the socket timeout.
Your API call doesn't get a response within the timeout period of your Lambda function.
If you make an API call using an AWS SDK and the call fails, the SDK automatically retries the call (https://aws.amazon.com/premiumsupport/knowledge-center/lambda-function-retry-timeout-sdk/). How long and how many times the SDK retries is determined by settings that vary among SDKs. For the default values of these settings, see the SDK client configuration documentation, e.g. for Java: https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/ClientConfiguration.html
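The original question is about the Java SDK (whose knobs live on ClientConfiguration, linked above), but as an illustration, here is a minimal sketch of tightening the equivalent settings in the AWS SDK for JavaScript v3; the specific values are assumptions, not recommendations:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { NodeHttpHandler } from '@smithy/node-http-handler';

// Fail fast instead of waiting on the defaults: cap total attempts and
// shorten the connection/socket timeouts.
const client = new DynamoDBClient({
  maxAttempts: 3, // one initial call plus two retries
  requestHandler: new NodeHttpHandler({
    connectionTimeout: 1000, // ms allowed to establish the TCP connection
    socketTimeout: 5000,     // ms to wait for data on an open socket
  }),
});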

Firebase Functions: Could not load default credentials

I have a Firebase Function that subscribes to a Cloud Pub/Sub topic. The app is initialized very simply, like this:
import * as admin from 'firebase-admin';
admin.initializeApp();
I'm getting this error:
"Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
at GoogleAuth.getApplicationDefaultAsync (/srv/functions/node_modules/google-auth-library/build/src/auth/googleauth.js:161:19)
at process._tickCallback (internal/process/next_tick.js:68:7)"
Here's the weird thing. It typically works. In other words, if I trigger it a second time it works. And a third time. Most often it seems to fail the first time it runs after a new firebase deploy and possibly on a "cold start."
Not sure what I'm doing wrong and why it would fail only on the first run.
SOLVED! This answer helped:
Error: Could not load the default credentials (Firebase function to firestore)
From within a Firebase Function for an API call, I was publishing to a Cloud PubSub topic like this:
pubsub.topic(topicName).publish(dataBuffer, customAttributes)
I was not awaiting the response and was immediately sending the 2XX HTTP response back to the client. The execution seemed to continue fine, but obviously it did not behave as intended.
Sometimes the API response call itself would fail (and never publish the message), but sometimes not. In other cases, the publish would succeed but the Firebase Function subscribing to the topic would fail!
In all cases, this seemed to resolve itself after running the script a second time. For this reason, I still believe it had something to do with a cold start.
But since I changed it to await like this:
await pubsub.topic(topicName).publish(dataBuffer, customAttributes)
I have not seen this problem happen again.
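For completeness, here is a minimal sketch of the fixed pattern inside an HTTP Firebase Function (the topic name and payload are placeholders): awaiting the publish before responding keeps the function's execution environment alive until the publish has actually completed.

import * as functions from 'firebase-functions';
import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub();

export const enqueue = functions.https.onRequest(async (req, res) => {
  const dataBuffer = Buffer.from(JSON.stringify(req.body)); // placeholder payload
  // Await the publish; responding first lets the platform freeze the
  // instance while the publish is still in flight.
  await pubsub.topic('my-topic').publish(dataBuffer); // placeholder topic name
  res.status(200).send('queued');
});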

How can I initialize firebase app in a serverless model? What is the concurrent app limit?

I created a serverless function that performs Firebase token validation.
Everything works as intended, except that I get errors on subsequent calls to initialize my app saying that the default app already exists (same container). This raises some questions.
If my serverless infrastructure were to spin up multiple concurrent containers, each initializing the app, would that also cause this error? That is, that the app is initialized elsewhere? Or is this error isolated to local instances?
If it's the latter, and I provide a named app based on the container it is spun up in, is there a Firebase limit on the maximum number of apps that can be initialized at once?
This is how I am initializing the app now:
cred = credentials.Certificate(SERVICE)
firebase_admin.initialize_app(cred)
I could do this, but am not sure about Firebase app limits or concurrent initializations (can't find any specifics in the docs):
cred = credentials.Certificate(SERVICE)
firebase_admin.initialize_app(cred, 'APP-NAME-[CONTAINERID]')
Or, should I just rewrite this using my own JWT decoder and grab the public keys from Google?
And here is the full error:
Error occurred setting firebase credentials: The default Firebase app already exists. This means you called initialize_app() more than once without providing an app name as the second argument. In most cases you only need to call initialize_app() once. But if you do want to initialize multiple apps, pass a second argument to initialize_app() to give each app a unique name.
UPDATE: AWS Lambda, Python.
I am going to test with the following, to prevent re-initializing the app within the same container on warm function executions, and move forward with the assumption that there are no API limits on calling auth.verify_id_token() and that this won't conflict with concurrent container executions. I'll report back if it tests out differently.
try:
    firebase_admin.get_app()
    logger.info('firebase already initialized.')
except ValueError:
    logger.info('firebase not initialized. initializing.')
    cred = credentials.Certificate(SERVICE)
    firebase_admin.initialize_app(cred)
I will probably still migrate to another JWT validation library to reduce function size (since I already have a JWT library for my own app's use) and move away from relying on the Firebase API to decode the token.
If you get an error when initializing the Admin SDK that says the default app already exists, that just means you're trying to initialize the Admin SDK twice in the same process. Obviously, don't do that. If you initialize once and only once per process, you will never see this error.
You will have to take some care to only call the init method once per server instance. It's not clear exactly what you're doing from the code you've shown. I don't know about Python, but with Node you can initialize once in a global context without problems. If you need to initialize during a function execution, you should have some flag to check that ensures the default Firebase app hasn't already been initialized, and initialize only conditionally based on that flag, as sketched below.
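A minimal sketch of that guard in Node, using the admin.apps array the Admin SDK exposes (my illustration, not code from the original answer):

import * as admin from 'firebase-admin';

// Module scope runs once per container, so warm invocations reuse the app.
// admin.apps is empty until initializeApp() has been called in this process.
if (admin.apps.length === 0) {
  admin.initializeApp();
}

export const handler = async (event: unknown) => {
  // ... use admin.auth().verifyIdToken(...) etc. here
};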

Hitting 100 active connections limit in test env with only two users

I have a single web client and a few Lambda functions which use the Admin SDK. I've noticed recently that I've bumped into the 100 simultaneous connection limit, but I really shouldn't be anywhere near that limit. Also, it would appear that the connections established by my Lambda functions are not dropping off even after the function has completed.
Any idea on:
how I can prevent this run-up on connections from happening?
how I can release connections established by past Lambda scripts?
how can I monitor which processes/threads/stacks are holding connections?
Note: this is a testing environment I'm working out of so I'd prefer to keep this in the free tier and my requirements should definitely not be running into the 100 active limit. I am on a paid plan in prod.
I attempt to avoid calling initializeApp more than once by using the following connection code. In this example I only have a single database as a backend, so the default app name of "[DEFAULT]" is used each time.
const name = '[DEFAULT]'; // the Admin SDK's default app name
const runningApps = new Set(firebase.apps.map(i => i.name));
this.app = runningApps.has(name)
  ? firebase.app()
  : firebase.initializeApp({
      credential: firebase.credential.cert(serviceAccount),
      databaseURL: config.databaseUrl
    });
I'm now trying to explicitly close connections with goOffline, but that leads to another issue: on the second connection -- i.e., where the [DEFAULT] app is already set up and it just reuses the already-established connection -- I get the following logging:
# Generated as result of `goOnline`
Connecting to Firebase: [https://xyz.firebaseio.com]
appears to be already connected
# Listening on ".info/connected" comes back as true, resulting in:
AbstractedAdmin: connected to [DEFAULT]
# but then I get this error
NotAllowed: You must first connect before using the database() API at Object._getFirebaseType
The fact that you have unexpected incoming connections to the database makes it seem like the stale instances keep an open connection.
Best I can think of is to call goOffline() in your function before it completes, to explicitly disconnect. That would probably also mean you have to call goOnline() at the start of the function, since it might be running on an instance that previously went offline. Both goOnline() and goOffline() are synchronous calls afaik, but there's definitely going to be some time between going online and the data becoming available in your app.
If Lambda has a way for you to detect life-cycle events of its instances, that would be the preferred place to call goOffline() and goOnline(). A sketch of the per-invocation approach follows.
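A minimal sketch of that per-invocation pattern in a Node Lambda handler (the databaseURL and ref path are placeholders; goOnline() and goOffline() are the actual Admin SDK calls, the rest is an assumption about your setup):

import * as admin from 'firebase-admin';

if (admin.apps.length === 0) {
  admin.initializeApp({ databaseURL: 'https://xyz.firebaseio.com' }); // placeholder URL
}

export const handler = async () => {
  const db = admin.database();
  db.goOnline(); // reconnect: this container may have gone offline on a previous run
  try {
    const snapshot = await db.ref('some/path').once('value'); // placeholder path
    return snapshot.val();
  } finally {
    db.goOffline(); // drop the realtime connection so the idle container doesn't hold a slot
  }
};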
admin.initializeApp should only be called once in your script/Node app.
The Firebase SDKs talk HTTP/2 to the Firebase cloud system, so I'm not sure why you would encounter max-connection issues, as unique sockets are not stood up per call.
One thing to look out for is that calls to third-party APIs (such as SendGrid) are not supported on the free tier.

Can Cloud Functions for Firebase be used across projects?

I was hoping to trigger a Pub/Sub function (using functions.pubsub / onPublish) whenever a new Pub/Sub message is sent to a topic/subscription in a third-party project i.e. cross projects.
After some research and experimentation I found that TopicBuilder throws an error if the topic name contains a /, and that it defaults to "projects/" + process.env.GCLOUD_PROJECT + "/topics/" + topic (https://github.com/firebase/firebase-functions/blob/master/src/providers/pubsub.ts).
I also found a post in Stack Overflow that says that "Firebase provides a (relatively thin) wrapper around Google Cloud Functions"
(What is the difference between Cloud Function and Firebase Functions?)
This led me to look into Google Cloud Functions. Whilst I was able to create a subscription in a project I own to a topic in a third-party project - after changing permissions in IAM - I could not find a way to associate a function with the topic. Nor was I successful in associating a function with a topic and subscription in a third-party project. In the console I only see the topics in my own project, and I had no success using gcloud.
Has anyone had any success in using a function across projects and, if so, how did you achieve this and is there a documentation URL you could provide? If a function can't be triggered by a message to a topic and subscription in a third-party project can you think of a way that I could ingest third-party Pub/Sub data?
As Pub/Sub fees are billed to the project that contains the subscription I would prefer that the subscription resides in the third-party project with the topic.
Thank you
Google Cloud Functions currently does not allow a function to listen to a resource in another project. For Cloud Pub/Sub triggers specifically, you could get around this by deploying an HTTP function and adding a Pub/Sub push subscription to the topic that should fire that cross-project function, as sketched below.
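A minimal sketch of such an HTTP function (the function name is a placeholder; the request body shape is the documented Pub/Sub push format):

import * as functions from 'firebase-functions';

// A Pub/Sub push subscription POSTs JSON like:
// { "message": { "data": "<base64>", "attributes": { ... } }, "subscription": "..." }
export const crossProjectHandler = functions.https.onRequest((req, res) => {
  const message = req.body && req.body.message;
  const payload = message && message.data
    ? Buffer.from(message.data, 'base64').toString('utf8')
    : '';
  console.log('Received push message:', payload);
  // Any 2xx response acknowledges the message; other codes trigger redelivery.
  res.status(204).send('');
});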
A Google Cloud Function can't be triggered by a subscription to a topic of another project (since you can't subscribe to another project's topic).
But a Google Cloud Function can publish to a topic of another project (and then the subscribers of that topic will be triggered).
I solved it by establishing a Google Cloud Function in the original project which listens to the original topic and reacts by publishing to a new topic in the other project. For this to work, the service account (...@appspot.gserviceaccount.com) of this function "in the middle" needs to be authorized on the new topic (console.cloud.google.com/cloudpubsub/topic/detail/...?project=...), i.e. added as a principal with the role "Pub/Sub Publisher".
import base64
import json
import os

from google.cloud import pubsub_v1

# https://cloud.google.com/functions/docs/calling/pubsub?hl=en#publishing_a_message_from_within_a_function

# Instantiates a Pub/Sub client
publisher = pubsub_v1.PublisherClient()

def notify(event, context):
    project_id = os.environ.get('project_id')
    topic_name = os.environ.get('topic_name')

    # References an existing topic
    topic_path = publisher.topic_path(project_id, topic_name)

    message_json = json.dumps({
        'data': {'message': 'here would be the message'},  # or you can pass the message of event/context
    })
    message_bytes = message_json.encode('utf-8')

    # Publishes a message
    try:
        publish_future = publisher.publish(topic_path, data=message_bytes)
        publish_future.result()  # Verify the publish succeeded
        return 'Message published.'
    except Exception as e:
        print(e)
        return (e, 500)
Google Cloud Endpoints can be an easier solution for adding auth to the HTTP function.
