Note: for clarification, this is not the Firebase API key. It is more like a token...something that the client app possesses and the server endpoint verifies.
We are trying to better secure an API key (think of a token used to validate a client to an endpoint). This will all be on our internal network, but we still want to be sure that only our mobile client can call the endpoint.
I was thinking that we could put the API key in a Firebase Remote Config parameter (with an invalid default value built into the app). However, the Firebase documentation for Remote Config says:
Don't store confidential data in Remote Config parameter keys or parameter values. It is possible to decode any parameter keys or values stored in the Remote Config settings for your project.
I wasn't sure whether this refers only to the default values that are bundled with the app, or also to values that are loaded remotely. Once we have the key, we can encrypt it and store it on the device via our MDM provider.
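For reference, the fetch would look roughly like this (a minimal Android/Kotlin sketch; the parameter name `api_key` and the `INVALID` sentinel are placeholders of mine, not anything Firebase prescribes):

```kotlin
import com.google.firebase.ktx.Firebase
import com.google.firebase.remoteconfig.ktx.remoteConfig

fun fetchApiKey(onReady: (String) -> Unit) {
    val remoteConfig = Firebase.remoteConfig
    // Invalid sentinel baked into the app; the real value only exists server-side.
    remoteConfig.setDefaultsAsync(mapOf("api_key" to "INVALID"))
    remoteConfig.fetchAndActivate().addOnCompleteListener { task ->
        val key = remoteConfig.getString("api_key")
        if (task.isSuccessful && key != "INVALID") {
            onReady(key) // next step: encrypt and store via the MDM provider
        }
    }
}
```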
Also, is the transfer of the Remote Config data to the app encrypted, or is it done in clear text?
Thanks for any more information that anyone can provide about the remote config.
It depends on how secure you want to keep your API Key. What does the API key allow someone to do? If it's simply to identify your app to another service, for example the YouTube Data API, then the worst that can happen is that a malicious user uses up your quota for that resource. On the other hand, if the key allows the holder to make some irreversible changes to important data without further authentication and authorization, then you never want it stored on their device in any form.
Your quote from the Firebase documentation answers your question. In general, you should not be storing private keys in your app. Check out the answers to this question for thorough explanations.
Using Firebase's Remote Config is hardly more secure than shipping keys in the app bundle. Either way, the data ends up on users' hardware. A malicious person can then theoretically access it, no matter how difficult we may think that is to do.
Also, I can't say for sure (you should be able to easily test this), but I HIGHLY doubt that Remote Config values are sent as plain text. Google does everything over HTTPS by default.
@Frank van Puffelen can confirm this, but from my understanding Firebase Remote Config uses HTTPS rather than plain HTTP, which makes sniffing the information shared between the app and Remote Config much harder than decompiling the APK and reading the string constants generated by Gradle build configurations. For instance, when you debug an app with a network proxy sniffer such as Charles Proxy, you can't view the endpoint details unless the app is compiled in debug mode, due to HTTPS and the newer security measures in the latest API versions.
See What makes "https" sites more secure than "http"?.
The HTTP protocol doesn't encrypt data in transit, so your personal information can be intercepted or even manipulated by third parties. To capture network information (passwords, credit card numbers, user IDs, etc.), hackers use a method called "sniffing". If network packets aren't encrypted, the data within them can be read and stolen with the help of a hacker application.
By contrast, HTTPS keeps any kind of data, including passwords, text messages, and credit card details, safe during transit between your computer and the servers. HTTPS keeps your data confidential by using the TLS protocol, frequently referred to as SSL, a secure certificate which offers three layers of protection: encryption, data integrity, and authentication.
SSL certificates use what is known as asymmetric public key cryptography, or a Public Key Infrastructure (PKI) system. A PKI system uses two different keys to encrypt communications: a public key and a private key. Anything that is encrypted with the public key can only be decrypted by the corresponding private key, and vice versa. HTTPS can also protect you from hacker attacks such as man-in-the-middle attacks, DNS rebinding, and replay attacks.
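To make that public/private key property concrete, here's a minimal JVM/Kotlin sketch (illustrative only -- TLS actually negotiates symmetric session keys rather than encrypting traffic with raw RSA like this):

```kotlin
import java.security.KeyPairGenerator
import javax.crypto.Cipher

fun main() {
    // Generate an RSA key pair: the public half can be shared freely.
    val pair = KeyPairGenerator.getInstance("RSA").apply { initialize(2048) }.genKeyPair()

    val cipher = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding")
    cipher.init(Cipher.ENCRYPT_MODE, pair.public)
    val ciphertext = cipher.doFinal("card number 1234".toByteArray())

    // Only the matching private key can recover the plaintext.
    cipher.init(Cipher.DECRYPT_MODE, pair.private)
    println(String(cipher.doFinal(ciphertext))) // prints: card number 1234
}
```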
Further Security Measures
DexGuard offers string encryption, according to its landing page. I've sent them a message and am waiting to hear how much this would cost for an indie developer.
Using a public/private API key exchange may add a further layer of security. However, I need to research the implementation further to better understand this option.
Related
As the title states, I've gotten this email for both projects I've made public on GitHub. One is a landing page for a local business and the other is a CRUD app I have on the App Store; both use Firebase as the backend.
Is the API key being visible on GitHub such a security risk?
I've done some research after following the instructions in the email to restrict my API key, and have heard that you cannot make web service requests with a restricted API key.
I just want to show the repos for these projects during the application process, and obviously don't want anything bad to happen with them by doing so.
Aren't Firebase APIs meant to be public?
If so, is it just my database rules that need to be stronger/more verbose?
If any more context is needed, please let me know!
Cheers!
NOTE: I'm still very new to programming so a lot of this is over my head
The Firebase apiKey in a web app is intended to be public, so you can ignore this email -- see https://stackoverflow.com/a/37484053/771768.
Hopefully Best practices for securely using API keys helps.
I'm uncertain as to what you're doing specifically that's resulting in the email, but it is warranted.
Please be very careful with API keys.
As the name suggests, these are like keys in that they unlock access to stuff. With digital keys, the additional challenge is that, once obtained, infinite copies of the key may be distributed (and these are usable until the API key is revoked).
There are often other, complementary or alternative, ways to authenticate APIs but, as I think you've discovered, sometimes you are required to use API keys.
Where they're required, you should endeavor to use complementary authentication mechanisms too, in order to mitigate overuse, and you should continue to be very judicious in your publication of these keys.
I suspect you should not be including (any) keys (ever) in your GitHub repos.
One rule of thumb is that vendors (like Google) use API keys as a way to limit access to (often paid) resources. If the vendor is giving you a key, they're often (not always) using the key as a way to determine how to charge you for the API too. If you're giving the key to others, you're giving them the possibility of incurring charges on your behalf.
I don't wish to scare you, but I would like you to leave this question being very cautious when using keys, even if that only causes you to read up more on the consequences of using them.
Usually, this story goes like "client encrypts using the public key; server decrypts using the very-safely-stored private key".
Well, I have the opposite issue.
In a mobile app, I am using a web service library whose API requires 2 secret keys from my personal account for that service. Anyone with access to these 2 secret keys can use them to call the same service's APIs as if the calls came from my app. So I definitely do not want to embed those keys in the app, as decompilation might easily expose them.
So I thought I'd store those keys server-side, and send them encrypted with a public key to the app. Then, the app would decrypt them using the private key stored in the app itself.
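Concretely, the app side would be something like this (a minimal JVM/Kotlin sketch, assuming the private key ships Base64-encoded in PKCS#8 form and the payload arrives Base64-encoded; all names are placeholders):

```kotlin
import java.security.KeyFactory
import java.security.spec.PKCS8EncodedKeySpec
import java.util.Base64
import javax.crypto.Cipher

// The private key is only obfuscated, not secret: anyone who extracts it
// from the binary can decrypt the payload too. This just raises the bar.
fun decryptServiceKeys(payloadB64: String, privateKeyB64: String): String {
    val keySpec = PKCS8EncodedKeySpec(Base64.getDecoder().decode(privateKeyB64))
    val privateKey = KeyFactory.getInstance("RSA").generatePrivate(keySpec)
    val cipher = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding")
    cipher.init(Cipher.DECRYPT_MODE, privateKey)
    return String(cipher.doFinal(Base64.getDecoder().decode(payloadB64)))
}
```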
I know it's still not secure, but at least, a simple man-in-the-middle attack or a binary decompilation analysis will not scream "Service API keys here, come and get them!". The intention here is just to make it harder for someone to get a hold of those keys.
Do you think this would be a good idea? Do you have any other alternatives?
The secure way to handle your private keys is to keep them on the server and never release them to the client.
For each approved action create a server endpoint (e.g. an AWS Lambda). The server endpoint knows the private keys, but the app just knows where the endpoints are. This restricts the functionality to only what you approve, but the endpoints themselves can be discovered and could be used by other people without going via your app.
The endpoints can use some authentication such as JWT bearer tokens (see https://www.jsonwebtoken.io/) to ensure they are only used by the application, but this requires server-side knowledge of who is registered with the app.
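As a sketch of that check (Kotlin, using the auth0 java-jwt library; the secret, issuer, and how the token reaches the endpoint are assumptions for illustration):

```kotlin
import com.auth0.jwt.JWT
import com.auth0.jwt.algorithms.Algorithm
import com.auth0.jwt.exceptions.JWTVerificationException

// The verifier lives on the server, next to the private keys it protects.
val verifier = JWT.require(Algorithm.HMAC256("secret-shared-with-your-auth-server"))
    .withIssuer("your-app") // placeholder issuer claim
    .build()

fun handleEndpointCall(bearerToken: String): Boolean =
    try {
        verifier.verify(bearerToken) // throws if the signature or claims are invalid
        true  // token is valid: perform the approved action with the private keys here
    } catch (e: JWTVerificationException) {
        false // reject: the caller could not prove it is your app
    }
```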
Alternatively, if the private keys cannot be used for actions you do not want your application users taking, is it worth protecting them?
There are good reasons for aiming to make things hard but not perfectly secure; for example, the cost of creating all those endpoints I mentioned vs. the risk of someone abusing the private keys. Unfortunately, that means someone has to agree to a compromise, and I can't advise you on the best one.
I am currently working on a customized media center/box product for my employer. It's basically a Raspberry Pi 3B+ running Raspbian, configured to auto-update periodically via apt. The device accesses binaries for proprietary applications via a private, secured apt repo, using a pre-installed certificate on the device.
Right now, the certificate on the device is set to never expire, but going forward, we'd like to configure the certificate to expire every 4 months. We also plan to deploy a unique certificate per device we ship, so the certs can be revoked (i.e. in case the customer reports the device as stolen).
Is there a way, via apt or OpenStack/Barbican KMS, to:
* Update the certs for apt-repo access on the device periodically.
* Set up key-encryption-keys (KEKs) on the device, if we need the device to be able to download sensitive data, such as an in-memory cached copy of customer info.
* Provide a mechanism for a new key to be deployed on the device if the currently-used key has expired (i.e. the user hasn't connected the device to the internet for more than 4 months). I'm trying to wrap my head around this one, since the device now (in this state) has an expired certificate, and I can't determine how to let it be trusted to pull a new one.
* Allow keys to be tracked, revoked, and de-commissioned.
Thank you.
Is there a way I could use Barbican to:
* Update the certs for apt-repo access on the device periodically.
Barbican used to have an interface to issue certs, but this was removed. Therefore Barbican is simply a service to generate and store secrets.
You could use something like certmonger. certmonger is a client-side daemon that generates cert requests and submits them to a CA. It then tracks those certs and requests new ones when the certs are going to expire.
* Set up key-encryption-keys (KEKs) on the device, if we need the device to be able to download sensitive data, such as an in-memory cached copy of customer info.
To use Barbican, you need to be able to authenticate and retrieve something like a Keystone token. Once you have that, you can use Barbican to generate key encryption keys (which would be stored in the Barbican database) and download them to the device using the secret retrieval API.
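For illustration, that retrieval step is just an authenticated HTTP GET against Barbican's secrets API (a Kotlin sketch; the host, port, and IDs are placeholders):

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Fetch a secret's raw payload from Barbican, authenticating with a
// previously obtained Keystone token.
fun fetchKek(keystoneToken: String, secretId: String): ByteArray {
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://barbican.example.com:9311/v1/secrets/$secretId/payload"))
        .header("X-Auth-Token", keystoneToken)
        .header("Accept", "application/octet-stream")
        .GET()
        .build()
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofByteArray())
        .body()
}
```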
Do you need/want the KEKs escrowed like this, though?
* Provide a mechanism for a new key to be deployed on the device if the currently-used key has expired (i.e. the user hasn't connected the device to the internet for more than 4 months).
Barbican has no mechanism for this. This is client-side tooling that would need to be written. You'd need to think about authentication.
* Allow keys to be tracked, revoked, and de-commissioned.
Same as above. Barbican has no mechanism for this.
According to Google Cloud, all customer data is encrypted automatically. Then why isn't my data encrypted when I export it to Google Cloud Storage and download it from there? Do I need to enable some service anyway for the encryption to work?
I really appreciate your help, thanks!
Your data is always encrypted by default on the server side. What this means is that, once Google receives the data, it encrypts it. It is recommended that you always send your data over HTTPS or TLS. Data is automatically and transparently decrypted when read by an authorised user.
You can use the other two options for server-side encryption, and you can also encrypt the data on your side before sending it. I think this option suits your concern, but you'll have to manage your own encryption on your side and ensure you never lose your keys. Nevertheless, GCP will encrypt your data again once received.
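If you go the client-side route, a minimal sketch could look like this (Kotlin/JVM, using AES-GCM; you keep the key, so Google only ever stores ciphertext):

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Generate a 256-bit AES key that never leaves your side.
fun newDataKey(): SecretKey =
    KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

// Encrypt before upload; store the IV alongside the ciphertext.
// Losing the key means losing the data -- GCP cannot recover it for you.
fun encryptForUpload(key: SecretKey, plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key)
    return cipher.iv to cipher.doFinal(plaintext)
}
```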
Some web applications, like Google Docs, store data generated by the users. Data that can only be read by its owner. Or maybe not?
As far as I know, this data is stored as-is in a remote database. So, anybody with enough privileges in the remote system (a sysadmin, for instance) could snoop on my data, and my privacy would be compromised.
What could be the best solution to store this data encrypted in a remote database and that only the data's owner could decrypt it? How to make this process transparent to the user? (You can't use the user's password as the key to encrypt his data, because you shouldn't know his password).
If encryption/decryption is performed on the server, there is no way you can make sure that the cleartext is not dumped somewhere in some log file or the like.
You need to do the encryption/decryption inside the browser using JavaScript/Java/ActiveX or whatever. As a user, you need to trust the client-side of the web service not to send back the info unencrypted to the server.
Carl
I think Carl nailed it on the head, but I wanted to say that with any website, if you are providing it any confidential/personal/privileged information then you have to have a certain level of trust, and it is the responsibility of the service provider to establish this trust. This is one of those questions that has been asked many times across the internet since its inception, and it will only continue to grow until we all have our own SSL certs encoded on our fingerprints, and even then we will have to ask the question 'How do I know that the finger is still attached to the user?'.
Well, I'd consider a process similar to Amazon's AWS. You authenticate with a private password that is not saved remotely; just a hash is used to validate the user. Then you generate a certificate with one of the mainstream, long-tested algorithms and provide it from a secure page. A public/private key algorithm can then be used to encrypt things for the users.
But the main problem remains the same: if someone with enough privileges can access the data (say, they hacked your server), you're lost. Given enough time and power, everything can be broken. It's just a matter of time.
But I think algorithms and applications like GPG/PGP and similar are very well known and can be implemented in a way that secures web applications - and keeps usability at a level the average user can handle.
edit: I want to catch up with @Carl and Unkwntech and add their statement: if you don't trust the site itself, don't give private data away. That's even before someone hacks their servers... ;-)
Auron asked: How do you generate a key for the client to encrypt/decrypt the data? Where do you store this key?
Well, the key is usually derived from some password the user has chosen. You don't store it; you trust the user to remember it. What you can store is maybe some salt value associated with that user, to increase security against rainbow-table attacks, for instance.
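A minimal sketch of that derivation (Kotlin/JVM; the iteration count and salt size are just plausible defaults, not a recommendation from this thread):

```kotlin
import java.security.SecureRandom
import javax.crypto.SecretKeyFactory
import javax.crypto.spec.PBEKeySpec

// One random salt per user, stored server-side; the salt is not a secret.
fun newSalt(): ByteArray = ByteArray(16).also { SecureRandom().nextBytes(it) }

// Derive the encryption key from password + salt. Only the user, who knows
// the password, can re-derive it; the server never stores the key itself.
fun deriveKey(password: CharArray, salt: ByteArray): ByteArray {
    val spec = PBEKeySpec(password, salt, 200_000, 256) // iterations, key length (bits)
    return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
        .generateSecret(spec).encoded
}
```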
Crypto is hard to get right ;-) I would recommend to look at the source code for AxCrypt and for Xecrets' off-line client.
Carl
No, you can't use passwords, but you could use password hashes. However, Google Docs are all about sharing, so such a method would require storing a copy of the document for each user.