Blank Firebase project makes connection to unexpected public database - firebase

Firebase makes a connection to an unexpected public database on a brand-new project. The database URL it connects to follows the pattern https://s-usc1c-nss-XXX.firebaseio.com, where XXX is a 3-digit number.
Opening the server from that connection in a browser shows what looks like an insecure public database, e.g. https://s-usc1c-nss-204.firebaseio.com/.json
Is this normal behavior?
"dependencies": {
"#babel/runtime": "^7.2.0",
"firebase": "^7.14.4",
"react": "^16.6.3",
"react-dom": "^16.6.3"
},

firebaser here
That connection is part of Firebase's internal routing protocol, and is how the Firebase client determines what server/cluster your database is currently hosted on.
The client caches the server/cluster name that it gets back, so you should usually not see this lookup on each connection.

All Realtime Database instances are "public": that's inherent to any cloud service that must be reachable directly by client apps running anywhere on the internet. Without a publicly reachable service, the client would have no way to make the query.
If you need to secure the database from direct client access, you will need to write security rules to determine who can read and write which locations in the database.

Related

Does my iOS app contain encryption if I use realm?

I'm finally uploading my app to App Store Connect. I'm using the latest version of RealmSwift, and I only created a default Realm database and never explicitly told Realm to use encryption. In this case, does my app contain encryption?
Also, I sometimes see output that looks like Realm established an internet connection (I don't know whether it's HTTPS or not), and I don't know why. Maybe to check for Realm updates?
In that case, does Realm really establish an HTTPS connection? Which should I choose: contains encryption or not?
A Realm database is not encrypted unless you tell it to be. For example, you would need to include code like the following for the Realm to be encrypted:
import Foundation
import Security
import RealmSwift

// Generate a random 64-byte encryption key
var key = Data(count: 64)
_ = key.withUnsafeMutableBytes { (pointer: UnsafeMutableRawBufferPointer) in
    SecRandomCopyBytes(kSecRandomDefault, 64, pointer.baseAddress!)
}

// Open the Realm with the encryption key set in its configuration
let config = Realm.Configuration(encryptionKey: key)
do {
    let realm = try Realm(configuration: config)
} catch let error as NSError {
    fatalError("Error opening realm: \(error.localizedDescription)")
}
That being said, by default with iOS 8 and above app files are encrypted using NSFileProtection whenever the device is locked.
When it comes to a sync'd Realm, that's a little different as the on-disk files can be encrypted per above, but the sync'd data stored in MongoDB is not encrypted.
A sync'd Realm will establish a connection to the sync server (MongoDB), so that will result in internet traffic. Likewise, if you're using REST calls to pull data from MongoDB, those will also result in network activity.

BAD_GATEWAY when connecting Google Cloud Endpoints to Cloud SQL

I am trying to connect from GCP Endpoints to a Cloud SQL (PostgreSQL) database in a different project. My Endpoints backend is an App Engine app in the flexible environment, using Python.
The endpoints API works fine for non-db requests and for db requests when run locally. But the deployed API produces this result when requiring DB access:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but take the connection string from a config file and use it with psycopg2 and plain SQL strings. This code works on Compute Engine servers in the same project.
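Roughly, the DB access looks like this (a simplified sketch with placeholder values; in the real code the connection parameters come from the config file):
import psycopg2

# Simplified sketch: in the real app these values are read from the config file.
conn = psycopg2.connect(
    dbname='postgres',
    user='postgres',
    password='password',
    # App Engine flex exposes the Cloud SQL instance as a unix socket
    # under /cloudsql/<PROJECT>:<REGION>:<INSTANCE> when beta_settings is set.
    host='/cloudsql/cloudsql-project-id:us-east1:instance-id'
)
with conn.cursor() as cur:
    cur.execute('SELECT 1')
    print(cur.fetchone())
conn.close()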
I also double-checked that, in the project with the PostgreSQL db, the service account of the Endpoints project was given Cloud SQL Editor access. And the db connection string works fine if the App Engine app is in the same project as the Cloud SQL db (i.e. not coming from the Endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the endpoints logfile and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id

beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id

endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
(This should be a comment, but the formatting would really hurt readability, so I will update the answer here.)
I am trying to reproduce your error and have come up with some questions:
How are you handling the environment variables from the tutorials? Have you hard-coded them, or are you using environment variables? The latter are reset when Cloud Shell restarts (if you are using Cloud Shell).
This is not clear to me: do you see some kind of log in Cloud SQL (just without errors), or do you not see any logs at all?
The Cloud SQL, app.yaml and requirements.txt configurations are related. Could you provide more information on them? If you update the post, be careful not to post usernames, passwords or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to it in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post to better understand where the issue comes from.

Can Firebase RemoteConfig be accessed from cloud functions

I'm using Firebase as a simple game server and have some settings that are relevant for both the client and the backend. I would like to keep them in RemoteConfig for consistency, but I'm not sure whether I can access it from my cloud functions in a simple way (I don't consider going through the REST interface a "simple" way).
As far as I can tell there is no mention of it in the docs, so I guess it's not possible, but does anyone know for sure?
firebaser here
There is a public REST API that allows you to read and update the Firebase Remote Config template. This API requires that you have full administrative access to the Firebase project, so it must only be used in a trusted environment (such as your development machine, a server you control, or Cloud Functions).
There is no public API to get Firebase Remote Config settings from a client environment at the moment. Sorry I don't have better news.
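For completeness, a rough sketch of reading the template over that REST API from a trusted server environment (a sketch only; it assumes a service-account key file, the google-auth and requests Python packages, and a placeholder project ID):
import requests
from google.oauth2 import service_account
from google.auth.transport.requests import Request

# Placeholder values: replace with your own project ID and key file path.
PROJECT_ID = 'my-project-id'
SCOPES = ['https://www.googleapis.com/auth/firebase.remoteconfig']

# Obtain an OAuth2 access token from the service-account credentials.
creds = service_account.Credentials.from_service_account_file(
    'service-account.json', scopes=SCOPES)
creds.refresh(Request())

# GET the current Remote Config template.
resp = requests.get(
    'https://firebaseremoteconfig.googleapis.com/v1/projects/%s/remoteConfig' % PROJECT_ID,
    headers={'Authorization': 'Bearer %s' % creds.token})
resp.raise_for_status()
template = resp.json()
print(template.get('parameters', {}))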
This is probably only available in newer versions of the Firebase Admin SDK (version 8 or 9 and above, if I'm not mistaken).
// We first need to import the remoteConfig function.
import { remoteConfig } from 'firebase-admin'

// Then, in your cloud function, we use it to fetch the remote config template.
const remoteConfigTemplate = await remoteConfig().getTemplate().catch(e => {
  // Your error handling if fetching fails...
})

// Next it is just a matter of extracting the values, which is kinda convoluted.
// Let's say you want to extract the `game_version` field from remote config:
const gameVersion = remoteConfigTemplate.parameters.game_version.defaultValue.value
So parameters is always followed by the name of the field that you defined in the Firebase console's Remote Config, in this example game_version.
It's a mouthful (or typeful) but that's how you get it.
Also note that if the value is stored as a JSON string, you will need to parse it before use, typically with JSON.parse(gameVersion).
A similar process is outlined in the Firebase docs.

Possibility for only the currently connected (not authenticated) user and an admin to read and write at a certain location

Is there any way to write a security rule, or any other approach, that would make it possible for only the currently connected (not authenticated) user to read/write a certain location? An admin should also be able to read/write it.
Can a rule be written that disallows users from reading the complete list of entries and lets them read only the entry that matches some identifier passed from the client?
I'm trying to exchange some data between a user and a Node.js application through Firebase, and that data shouldn't be readable or writable by anyone other than the user and/or the admin.
I know that one solution would be for the user to request an auth token from my server and use it to authenticate with Firebase, which would make it possible to write a rule that prevents reads and writes. However, I'm trying to avoid the user connecting to my server, so this solution is not my first option.
This is in a way a session-based scenario, which is not available in Firebase, but I have some ideas that could solve this kind of problem - if implemented before session management:
maybe letting the admin write into the /.info/ location, which is observed by the client for every change and can be read only by the active connection - if I understood correctly how .info works
maybe creating a .temp location for that purpose
maybe letting the admin and the connected client have more access to connection information, which would contain some connection-unique id that can be used to create a location with that name and to use it inside a rule to prevent reading and listing by other users
Thanks
This seems like a classic XY problem (i.e. trying to solve the attempted solution instead of the actual problem).
If I understand your constraints correctly, the underlying issue is that you do not wish to have direct connections to your server. This is currently the model we're using with Firebase and I can think of two simple patterns to accomplish this.
1) Store the data in a non-guessable path
Create a UUID or GID or, assuming we're not talking bank-level security here, just a plain Firebase ID ( firebaseRef.push().name() ). Then have the server and client communicate via this path.
This avoids the need for security rules, since the paths are unguessable, or, in the case of the Firebase ID, close enough to it for normal uses.
Client example:
var fb = new Firebase(MY_INSTANCE_URL+'/connect');
var uniquePath = fb.push();
var myId = uniquePath.name();
// send a message to the server
uniquePath.push('hello world');
From the server, simply monitor /connect; each child that is added is a new client:
var fb = new Firebase(MY_INSTANCE_URL+'/connect');
fb.on('child_added', newClientConnected);
function newClientConnected(snapshot) {
   snapshot.ref().on('child_added', function(ss) {
      // when the client sends me a message, log it and then return "goodbye"
      console.log('new message', ss.val());
      ss.ref().set('goodbye');
   });
}
In your security rules:
{
  "rules": {
    // read/write are false by default
    "connect": {
      // contents cannot be listed, no way to find out ids other than guessing
      "$client": {
        ".read": true,
        ".write": true
      }
    }
  }
}
2) Use Firebase authentication
Instead of expending so much effort to avoid authentication, just use a third party service, like Firebase's built-in auth, or Singly (which supports Firebase). This is the best of both worlds, and the model I use for most cases.
Your client can authenticate directly with one of these services, never touching your server, and then authenticate to Firebase with the token, allowing security rules to take effect.

What's the best method for passing AWS credentials as user data to an EC2 instance?

I have a job-processing architecture based on AWS that requires EC2 instances to query S3 and SQS. In order for running instances to have access to the API, the credentials are sent as user data (-f) in the form of a base64-encoded shell script. For example:
$ cat ec2.sh
...
export AWS_ACCOUNT_NUMBER='1111-1111-1111'
export AWS_ACCESS_KEY_ID='0x0x0x0x0x0x0x0x0x0'
...
$ zip -P 'secret-password' ec2.zip ec2.sh
$ openssl enc -base64 -in ec2.zip
Many instances are launched...
$ ec2run ami-a83fabc0 -n 20 -f ec2.zip
Each instance decodes and decrypts ec2.zip using the 'secret-password' which is hard-coded into an init script. Although it does work, I have two issues with my approach.
'zip -P' is not very secure
The password is hard-coded in the instance (it's always 'secret-password')
The method is very similar to the one described here
Is there a more elegant or accepted approach? Using gpg to encrypt the credentials and storing the private key on the instance to decrypt it is an approach I'm considering now but I'm unaware of any caveats. Can I use the AWS keypairs directly? Am I missing some super obvious part of the API?
You can store the credentials on the machine (or transfer, use, then remove them.)
You can transfer the credentials over a secure channel (e.g. using scp with non-interactive authentication, e.g. a key pair), so that you would not need to perform any custom encryption (just make sure that permissions are properly set to 0400 on the key file at all times, e.g. set the permissions on the master files and use scp -p).
If the above does not answer your question, please provide more specific details re. what your setup is and what you are trying to achieve. Are EC2 actions to be initiated on multiple nodes from a central location? Is SSH available between the multiple nodes and the central location? Etc.
EDIT
Have you considered parameterizing your AMI, requiring those who instantiate your AMI to first populate the user data (ec2-run-instances -f user-data-file) with their AWS keys? Your AMI can then dynamically retrieve these per-instance parameters from http://169.254.169.254/1.0/user-data.
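For illustration, retrieving and parsing those parameters from inside the instance might look like this (a sketch only, assuming Python with the requests package and that the user data is a simple KEY=VALUE file):
import requests

# The instance metadata service returns whatever was passed as user data.
resp = requests.get('http://169.254.169.254/1.0/user-data', timeout=2)
resp.raise_for_status()

# Assuming the user data is a simple KEY=VALUE file, parse it into a dict.
params = dict(
    line.split('=', 1)
    for line in resp.text.splitlines()
    if '=' in line and not line.startswith('#')
)
print(params.get('AWS_ACCESS_KEY_ID'))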
UPDATE
OK, here goes a security-minded comparison of the various approaches discussed so far:
Security of data when stored in the AMI user-data unencrypted
low
clear-text data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access clear-text http://169.254.169.254/1.0/user-data)
you are vulnerable to proxy request attacks (e.g. attacker asks the Apache that may or may not be running on the AMI to get and forward the clear-text http://169.254.169.254/1.0/user-data)
Security of data when stored in the AMI user-data and encrypted (or decryptable) with easily obtainable key
low
easily-obtainable key (password) may include:
key hard-coded in a script inside an AMI (where the AMI can be obtained by an attacker)
key hard-coded in a script on the AMI itself, where the script is readable by any user who manages to log onto the AMI
any other easily obtainable information such as public keys, etc.
any private key (its public key may be readily obtainable)
given an easily-obtainable key (password), the same problems identified in point 1 apply, namely:
the decrypted data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access clear-text http://169.254.169.254/1.0/user-data)
you are vulnerable to proxy request attacks (e.g. an attacker asks the Apache that may or may not be running on the AMI to get and forward the encrypted http://169.254.169.254/1.0/user-data, subsequently decrypted with the easily-obtainable key)
Security of data when stored in the AMI user-data and encrypted with not easily obtainable key
average
the encrypted data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access encrypted http://169.254.169.254/1.0/user-data)
an attempt to decrypt the encrypted data can then be made using brute-force attacks
Security of data when stored on the AMI, in a secured location (no added value for it to be encrypted)
higher
the data is only accessible to one user, the user who requires the data in order to operate
e.g. file owned by user:user with mask 0600 or 0400
attacker must be able to impersonate the particular user in order to gain access to the data
additional security layers, such as denying the user direct log-on (having to pass through root for interactive impersonation) improves security
So any method involving the AMI user-data is not the most secure, because gaining access to any user on the machine (weakest point) compromises the data.
This could be mitigated if the S3 credentials were only required for a limited period of time (i.e. during the deployment process only), if AWS allowed you to overwrite or remove the contents of user-data when done with it (but this does not appear to be the case.) An alternative would be the creation of temporary S3 credentials for the duration of the deployment process, if possible (compromising these credentials, from user-data, after the deployment process is completed and the credentials have been invalidated with AWS, no longer poses a security threat.)
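If temporary credentials are an option, one way to mint them is the Security Token Service; here is a rough sketch (assuming boto3 and STS, which are my suggestion rather than part of the discussion above, and a placeholder bucket name):
import json
import boto3

# Policy restricting the temporary credentials to read-only access on one bucket
# ("my-deploy-bucket" is a placeholder).
policy = json.dumps({
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my-deploy-bucket/*"]
    }]
})

sts = boto3.client('sts')
resp = sts.get_federation_token(
    Name='deploy-worker',      # arbitrary name for the federated user
    Policy=policy,
    DurationSeconds=3600       # credentials expire after one hour
)
creds = resp['Credentials']    # AccessKeyId, SecretAccessKey, SessionToken, Expiration
print(creds['AccessKeyId'], creds['Expiration'])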
If the above is not applicable (e.g. S3 credentials needed by deployed nodes indefinitely) or not possible (e.g. cannot issue temporary S3 credentials for deployment only) then the best method remains to bite the bullet and scp the credentials to the various nodes, possibly in parallel, with the correct ownership and permissions.
I wrote an article examining various methods of passing secrets to an EC2 instance securely and the pros & cons of each.
http://www.shlomoswidler.com/2009/08/how-to-keep-your-aws-credentials-on-ec2/
The best way is to use instance profiles. The basic idea is:
Create an instance profile
Create a new IAM role
Assign a policy to the previously created role, for example:
{
  "Statement": [
    {
      "Sid": "Stmt1369049349504",
      "Action": "sqs:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Associate the role and instance profile together.
When you start a new EC2 instance, make sure you provide the instance profile name.
If all works well, and the library you use to connect to AWS services from within your EC2 instance supports retrieving the credentials from the instance meta-data, your code will be able to use the AWS services.
A complete example taken from the boto-user mailing list:
First, you have to create a JSON policy document that represents what services and resources the IAM role should have access to. For example, this policy grants all S3 actions for the bucket "my_bucket". You can use whatever policy is appropriate for your application.
BUCKET_POLICY = """{
"Statement":[{
"Effect":"Allow",
"Action":["s3:*"],
"Resource":["arn:aws:s3:::my_bucket"]}]}"""
Next, you need to create an Instance Profile in IAM.
import boto
c = boto.connect_iam()
instance_profile = c.create_instance_profile('myinstanceprofile')
Once you have the instance profile, you need to create the role, add the role to the instance profile and associate the policy with the role.
role = c.create_role('myrole')
c.add_role_to_instance_profile('myinstanceprofile', 'myrole')
c.put_role_policy('myrole', 'mypolicy', BUCKET_POLICY)
Now, you can use that instance profile when you launch an instance:
ec2 = boto.connect_ec2()
ec2.run_instances('ami-xxxxxxx', ..., instance_profile_name='myinstanceprofile')
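On the instance itself, no keys need to be supplied at all; boto falls back to the instance-profile credentials from the instance metadata. For example (a sketch, using the "my_bucket" bucket from the policy above):
import boto

# No access key or secret here: boto picks up the instance-profile
# credentials from the instance metadata service.
s3 = boto.connect_s3()
bucket = s3.get_bucket('my_bucket')
for key in bucket.list():
    print(key.name)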
I'd like to point out that you no longer need to supply any credentials to your EC2 instance. Using IAM, you can create a role for your EC2 instances. With these roles, you can set fine-grained policies that allow your EC2 instance to, for example, get a specific object from a specific S3 bucket and no more. You can read more about IAM Roles in the AWS docs:
http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html
As others have already pointed out here, you don't really need to store AWS credentials on an EC2 instance if you use IAM Roles -
https://aws.amazon.com/blogs/security/a-safer-way-to-distribute-aws-credentials-to-ec2/.
I will add that you can employ the same method for securely storing non-AWS credentials for your EC2 instance as well, for example if you have some db credentials you want to keep secure. You save the non-AWS credentials in an S3 bucket and use an IAM role to access that bucket.
You can find more detailed information on that here - https://aws.amazon.com/blogs/security/using-iam-roles-to-distribute-non-aws-credentials-to-your-ec2-instances/
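As a rough sketch of that pattern (the bucket and object names are made up for this example, and it assumes boto plus an instance role that grants read access to the bucket):
import json
import boto

# The AWS credentials come from the instance role; only the bucket and key
# names below are assumptions for this example.
s3 = boto.connect_s3()
bucket = s3.get_bucket('my-secrets-bucket')
key = bucket.get_key('db-credentials.json')

# Download and parse the non-AWS credentials (e.g. database user/password).
db_credentials = json.loads(key.get_contents_as_string(encoding='utf-8'))
print(db_credentials['username'])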
