Firebase service account private key exposed by admin.firestore() - firebase

I configured the Firebase Admin SDK in my Node.js backend in a variable called admin, and I call admin.firestore(). When I do console.log(admin.firestore()), I see the private key of my service account displayed in the backend terminal. Here is the console log I see:
Firestore {
  _settings: {
    credentials: {
      private_key: 'my actual private key',
      client_email: 'xxxxx'
    },
    projectId: 'pxxxx3',
    firebaseVersion: '8.13.0',
    libName: 'gccl',
    libVersion: '3.8.6 fire/8.13.0'
  },
  _settingsFrozen: false,
  _serializer: Serializer { createReference: [Function], allowUndefined: false },
  _projectId: 'xxxxx',
  registeredListenersCount: 0,
  _lastSuccessfulRequest: 0,
  _backoffSettings: { initialDelayMs: 100, maxDelayMs: 60000, backoffFactor: 1.3 },
  _preferTransactions: false,
  _clientPool: ClientPool {
    concurrentOperationLimit: 100,
    maxIdleClients: 1,
    clientFactory: [Function],
    clientDestructor: [Function],
    activeClients: Map {},
    terminated: false,
    terminateDeferred: Deferred {
      resolve: [Function],
      reject: [Function],
      promise: [Promise]
    }
  }
}
I am a bit concerned that this might be a security risk, even though it only appears in the code running on my backend. Should I be concerned?

If data is only ever available on your backend, then it is "secure" in the sense that only people with permission to access your backend can see it. The problem is not that the data is in the log; the problem is who you allow to see that log.
If the data never escapes to a client app, then you don't have to worry about random people on the internet seeing your credentials.

IMHO, if an external entity can log into your system, you have a different kind of problem.
If you think about it, most environment variables have to be placed somewhere at runtime. They should not be hardcoded in your code, but at runtime you need a mechanism that gets the values onto your system. After that, it's all about authorization: only users with the right permissions should be allowed onto your system.
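For instance, a common pattern with the firebase-admin SDK is to load the service account from an environment variable instead of committing it to the code. A minimal sketch, assuming a variable named FIREBASE_SERVICE_ACCOUNT holds the JSON key (the variable name is just an example):

const admin = require('firebase-admin');

// The service account JSON is injected at runtime, e.g. by the deployment platform.
const serviceAccount = JSON.parse(process.env.FIREBASE_SERVICE_ACCOUNT);

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
});

const db = admin.firestore();
// Avoid logging the Firestore instance itself: as shown above, its settings
// include the credentials, so the key would end up in your logs.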

Related

Problem generating the client generator component of API Platform with Nuxt.js

I have an api-platform project.
https://localhost:8888/api does show the API documentation.
When I want to generate the client generator component with the command:
npx @api-platform/client-generator https://127.0.0.1:8000/api . --generator nuxt
I have this response:
{
  api: Api { entrypoint: 'https://127.0.0.1:8000/api', resources: [] },
  error: {
    response: Response {
      size: 0,
      timeout: 0,
      [Symbol(Body internals)]: [Object],
      [Symbol(Response internals)]: [Object]
    }
  },
  response: Response {
    size: 0,
    timeout: 0,
    [Symbol(Body internals)]: { body: [PassThrough], disturbed: false, error: null },
    [Symbol(Response internals)]: {
      url: 'https://127.0.0.1:8000/api',
      status: 200,
      statusText: 'OK',
      headers: [Headers],
      counter: 0
    }
  },
  status: 200
}
No components have been generated, and I'm not sure where to go from there.
I did the setup successfully on my side (until I got a CORS issue probably because I was using https://demo.api-platform.com).
Here is my GitHub repo for a working, boilerplated client-side Nuxt app that I've got.
Some things are questionable: the use of Moment.js in 2022, the fact that this is a Nuxt app used as an SPA only (ssr: false), some out-of-date configuration (for the nuxt-i18n module, for example), components: false, the remaining ESLint warnings/errors in the various files, and the use of an additional vue-i18n package on top of nuxt-i18n. The setup looks okay otherwise.
In the end it more or less strips away the nice parts of Nuxt to get a basic Vue.js app used as an SPA, but fine, let's say that works.
There is this line in entrypoint.js
export const ENTRYPOINT = 'https://demo.api-platform.com'
Setting this one to your own local address may work.
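For example, assuming the API from the question is reachable at https://localhost:8888 (whether the /api suffix belongs in the entrypoint depends on how your routes are exposed), the change could look like this:

// entrypoint.js: point the generated client at your own API instead of the demo
export const ENTRYPOINT = 'https://localhost:8888'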
If it doesn't, try hosting it on Heroku or similar; a real hosted server may be needed. Otherwise, it could also be a migration/DB/any-other-backend issue.
It's not an issue on the Nuxt.js side itself at least.

Firebase email link authentication with custom domain

The Firebase documentation is a little vague on setting things up with a custom dynamic link domain.
In a react native app, I can successfully have a passwordless email authentication link sent to the user inbox.
My actionCodeSettings:
const actionCodeSettings = {
  url: 'https://xxxxxx.com/some-verify-email-redirect-web-page',
  // This must be true.
  handleCodeInApp: true,
  iOS: {
    bundleId: 'com.xxxxxx.xxxxxxapp',
  },
  android: {
    packageName: 'com.xxxxxx.xxxxxxapp',
    installApp: true,
    minimumVersion: '12',
  },
  dynamicLinkDomain: 'xxxxxx.com',
};
I already authorized my custom domain and was able to create and use dynamic links to the android app.
However, my custom dynamic link domain is xxxxxx.com/app. This means that my app is listening to the prefix /app:
"android": {
"package": "com.xxxxxx.xxxxxxapp",
"versionCode": 1,
"googleServicesFile": "./firebase/google-services.json",
"permissions": ["ACCESS_COARSE_LOCATION", "ACCESS_FINE_LOCATION"],
"intentFilters": [
{
"action": "VIEW",
"data": [
{
"scheme": "https",
"host": "xxxxxx.com",
"pathPrefix": "/app"
}
],
"category": ["BROWSABLE", "DEFAULT"]
}
]
},
The verification link I get looks like this: https://xxxxxx.com?link=https://xxxxxx-app.firebaseapp.com/__/auth/action?apiKey...continueUrl%3Dhttps://xxxxxx.com/app/some-verify-email-redirect-web-page%26lang%3Den&apn=com.xxxxxx.xxxxxxapp&amv=12&ibi=com.xxxxxx.xxxxxxapp&ifl=https://xxxxxx-app.firebaseapp.com/__/auth/action?apiKey...signIn%26oobCode...continueUrl%3Dhttps://xxxxxx.com/app/...
So I'm not surprised my app doesn't detect it as an app link, because the /app prefix is missing.
Interestingly enough, if I use the default xxxxxx.page.link domain for this purpose, then at least it tries to open some app, but of course cannot and redirects to the Play Store.
Do I need to create a dynamic link manually on Firebase to handle this custom domain verification link?
Do I need to add something to my intentFilters?
Any experts out there who can help with this? What am I missing?

How to properly configure Amplify Analytics?

I need some help understanding how to configure AWS Pinpoint analytics in Amplify. I'm currently using Amplify for Auth and have it configured like this in my index.js file:
export const configureAmplify = () => {
  window.LOG_LEVEL = "DEBUG"
  Hub.listen('auth', data => handleAmplifyHubEvent(data))
  Hub.listen('analytics', data => handleAmplifyHubEvent(data))
  Amplify.configure({
    Auth: {
      identityPoolId: "redacted",
      region: "us-west-2",
      userPoolId: "redacted",
      userPoolWebClientId: "redacted",
      mandatorySignIn: false,
      cookieStorage: {
        domain: process.env.NODE_ENV === 'development' ? "localhost" : "redacted",
        path: '/',
        expires: 1,
        secure: false
      }
    }
  })
}
To add Analytics, I started by adding this to my configureAmplify() function:
Analytics: {
  disabled: false,
  autoSessionRecord: true,
  AWSPinpoint: {
    appId: 'redacted',
    region: 'us-east-1',
    endpointId: `wgc-default`,
    bufferSize: 1000,
    flushInterval: 5000, // 5s
    flushSize: 100,
    resendLimit: 5
  }
}
Upon user sign-in, or on refresh from cookie storage, I called:
Analytics.updateEndpoint({
  address: user.attributes.email, // The unique identifier for the recipient. For example, an address could be a device token, email address, or mobile phone number.
  attributes: {
  },
  channelType: 'EMAIL', // The channel type. Valid values: APNS, GCM
  optOut: 'ALL',
  userId: user.attributes.sub,
  userAttributes: {
  }
})
After doing this, it seems to me that the data in the Pinpoint console is not accurate. For example, there are currently 44 sessions displayed when no endpoint filter is applied. If I add an endpoint filter by userAttributes: userId then no matter which ID I select, it shows all 44 sessions associated with that user. I suspect that is because the EndpointID is established at startup, and is not changed by the updateEndpoint call.
I have also tried omitting the Analytics key when I initially configure Amplify, and then calling Analytics.configure() after the user is signed in. With this approach, I can construct a user-specific endpointId. However, I think that doing it this way will mean that I don't capture any of the Authentication events (sign-in, sign-up, auth failure), because Analytics is not configured until after they occur.
So my question is what is the proper timing for configuring Amplify Analytics? How can I accurately capture session, auth and custom events, AND uniquely identify them by user id?
It's not necessary to assign a custom endpoint ID; Amplify handles it automatically and all events are tracked per device. Instead, if you really need it, update the endpoint with the userId after sign-in, as shown in the sketch below.
The advantage of adding the userId is that all the endpointIds of a user are automatically associated with that userId, so when you update a user's attribute, it is synchronized across the endpoints.
As you are using Cognito, Amazon Cognito can add user IDs and attributes to your endpoints automatically. For the endpoint user ID value, Amazon Cognito assigns the sub value that's assigned to the user in the user pool. To learn about adding users with Amazon Cognito, see Using Amazon Pinpoint Analytics with Amazon Cognito User Pools in the Amazon Cognito Developer Guide.
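A minimal sketch of that flow, assuming the Amplify JS library and a signed-in Cognito user (the function name is just an example); the endpoint keeps its auto-generated ID and is only tagged with the user's ID:

import { Auth, Analytics } from 'aws-amplify';

// Call this once after sign-in (or after restoring the session from cookie storage).
async function tagEndpointWithUser() {
  const user = await Auth.currentAuthenticatedUser();
  await Analytics.updateEndpoint({
    userId: user.attributes.sub, // the Cognito sub, as described above
  });
}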

this.user().context is undefined - Jovo Framework - Alexa

I'm currently using Jovo for cross-platform development of Alexa skills and Google Assistant actions.
I've hit a roadblock in which I'm trying to get the previous intent by doing either:
this.user().context.prev[0].request.intent or
this.user().getPrevIntent(0).
But it hasn't worked: I get that context is undefined and that getPrevIntent doesn't exist. According to the docs, I need to set up a table with DynamoDB (I did, and verified that it's working, since Jovo is able to store the user object) and pass the default configuration to App. But I still can't seem to get it to work. Any ideas?
const config = {
  logging: false,
  // Log incoming JSON requests.
  // requestLogging: true,
  /**
   * You don't want AMAZON.YesIntent on Dialogflow, right?
   * This will map it for you!
   */
  intentMap: {
    'AMAZON.YesIntent': 'YesIntent',
    'AMAZON.NoIntent': 'NoIntent',
    'AMAZON.HelpIntent': 'HelpIntent',
    'AMAZON.RepeatIntent': 'RepeatIntent',
    'AMAZON.NextIntent': 'NextIntent',
    'AMAZON.StartOverIntent': 'StartOverIntent',
    'AMAZON.ResumeIntent': 'ContinueIntent',
    'AMAZON.CancelIntent': 'CancelIntent',
  },
  // Configures DynamoDB to persist data
  db: {
    awsConfig,
    type: 'dynamodb',
    tableName: 'user-data',
  },
  userContext: {
    prev: {
      size: 1,
      request: {
        intent: true,
        state: true,
        inputs: true,
        timestamp: true,
      },
      response: {
        speech: true,
        reprompt: true,
        state: true,
      },
    },
  },
};
const app = new App(config);
Thanks 😊
To make use of the User Context Object of the Jovo Framework, you need to have at least v1.2.0 of the jovo-framework.
You can update the package to the latest version like this: npm install jovo-framework --save
(This used to be a comment. Just adding this as an answer so other people see it as well)
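After the update, and with the userContext configuration from the question, the previous intent should be accessible inside a handler roughly like this (a minimal sketch; the handler name and speech output are just placeholders):

app.setHandler({
  SomeIntent() {
    // prev[0] is the most recent previous request, per the userContext.prev config
    const prevIntent = this.user().context.prev[0].request.intent;
    this.tell(`Your previous intent was ${prevIntent}.`);
  },
});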

Is it insecure to just validate with SimpleSchema, and not use allow/deny rules?

I am using SimpleSchema (the node-simpl-schema package) in an isomorphic way. Validation messages show up on the client as well as from meteor shell.
My question is whether or not this setup is actually secure, and whether I also need to write allow/deny rules.
For example:
SimpleSchema.setDefaultMessages
  messages:
    en:
      "missing_user": "cant create a message with no author"

MessagesSchema = new SimpleSchema({
  content: {
    type: String,
    label: "Message",
    max: 200,
  },
  author_id: {
    type: String,
    autoform:
      defaultValue: ->
        Meteor.userId()
    custom: ->
      if !Meteor.users.findOne(_id: @obj.author_id)
        "missing_user"
  },
  room_id: {
    type: String,
  }
}, {tracker: Tracker})
In meteor shell I test it out and it works as intended.
> Messages.insert({content: "foo", author_id: "asd"})
/home/max/Desktop/project/meteor/two/.meteor/local/build/programs/server/packages/aldeed_collection2-core.js:501
throw error; // 440
^
Error: cant create a message with no author
Should I duplicate this validation logic in my allow/deny rules? Or can I let my allow function always return true, like I'm doing now?
I have some very simple rules that ensure the application is secure:
Do not use allow/deny rules - deny all client-side write requests.
If the client needs to write something in the database, they must do so through Meteor methods.
Ideally, the Meteor methods would call a function (which can be shared code or server-specific code), and the validity of the database modifier would then be checked against the schema inside these functions; see the sketch after this list.
Optionally, you can also create client-side methods, which would clean the object and carry out its own validation using the schema before calling the server-side method.
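A minimal sketch of that pattern, assuming the Messages collection and MessagesSchema from the question (the method name is just an example):

import { Meteor } from 'meteor/meteor';
import { check } from 'meteor/check';

// Deny all client-side writes; everything goes through methods.
Messages.deny({
  insert: () => true,
  update: () => true,
  remove: () => true,
});

Meteor.methods({
  'messages.insert'(content, roomId) {
    check(content, String);
    check(roomId, String);

    const doc = { content, author_id: this.userId, room_id: roomId };
    MessagesSchema.validate(doc); // throws if the doc does not match the schema
    return Messages.insert(doc);
  },
});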
