I want to ask: what is the risk if someone reverse engineers the app and extracts the license key, Maps app token, and app ID from AndroidManifest.xml?
Thanks.
On the one hand, you can store the app_id and app_code on a server and fetch them from there.
On the other hand, keep in mind that the license key is bound to your namespace, which is a security mechanism in itself: it won't work for any app with a different combination.
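If you go the server route, a minimal sketch might look like this (the endpoint URL and auth header are hypothetical; your backend would return the credentials only to authenticated clients):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class CredentialFetcher {
    public static String fetchCredentials() throws Exception {
        // Hypothetical endpoint on your own backend, so the credentials
        // never ship inside AndroidManifest.xml.
        URL url = new URL("https://api.example.com/v1/map-credentials");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        // Some form of app/user authentication is assumed here.
        conn.setRequestProperty("Authorization", "Bearer <session-token>");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) body.append(line);
            return body.toString(); // e.g. JSON containing app_id and app_code
        }
    }
}
```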
As the title states, I've gotten this email for both projects I've made public on GitHub. One is a landing page for a local business and the other is a CRUD app I have on the App Store; both use Firebase as the backend.
Is the API key being visible on GitHub really such a security risk?
I've done some research after following the instructions in the email to restrict my API key, and I've heard that you cannot make web service requests with a restricted API key.
I just want to show the repos for these projects during the application process and obviously don't want anything bad to happen to them by doing so.
Aren't Firebase APIs meant to be public?
If so, is it just my database rules that need to be stronger/more verbose?
If any more context is needed, please let me know!
Cheers!
NOTE: I'm still very new to programming so a lot of this is over my head
The Firebase apiKey in a web app is intended to be public, so you can safely ignore this email -- see: https://stackoverflow.com/a/37484053/771768
Hopefully Best practices for securely using API keys helps.
I'm not sure what specifically you're doing that triggered the email, but the warning is warranted.
Please be very careful with API keys.
As the name suggests, these are like keys in that they unlock access to stuff. With digital keys, the additional challenge is that, once obtained, infinite copies of the key may be distributed (and these are usable until the API key is revoked).
There are often other, complementary or alternative, ways to authenticate APIs but, as I think you've discovered, sometimes you are required to use API keys.
Where keys are required, you should use complementary authentication mechanisms too, to mitigate abuse, and you should continue to be very judicious about where you publish these keys.
I suspect you should not be including (any) keys (ever) in your GitHub repos.
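One common pattern is to load keys from the environment (or an untracked config file listed in .gitignore) instead of source code; a minimal sketch, with an illustrative variable name:

```java
public class ApiKeyConfig {
    // Read the key from the environment rather than hardcoding it, so it
    // never appears in the repository history.
    public static String apiKey() {
        String key = System.getenv("MY_SERVICE_API_KEY"); // name is illustrative
        if (key == null || key.isEmpty()) {
            throw new IllegalStateException("MY_SERVICE_API_KEY is not set");
        }
        return key;
    }
}
```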
One rule of thumb is that vendors (like Google) use API keys to meter access to (often paid) resources. If a vendor gives you a key, they're often (not always) also using it to determine how to charge you for the API. If you give the key to others, you give them the ability to incur charges on your behalf.
I don't wish to scare you, but I would like you to leave this question more cautious about using keys, even if that only prompts you to read up on the consequences of using them.
Usually, this story goes like "client encrypts using the public key; server decrypts using the very-safely-stored private key".
Well, I have the opposite issue.
In a mobile app, I am using a web service library whose API requires 2 secret keys from my personal account for that service. Anyone with access to these 2 secret keys can use them to call the same service's APIs as if the calls came from my app. So I definitely do not want to embed those keys in the app, since decompilation might easily expose them.
So I thought I'd store those keys server-side, and send them encrypted with a public key to the app. Then, the app would decrypt them using the private key stored in the app itself.
I know it's still not secure, but at least, a simple man-in-the-middle attack or a binary decompilation analysis will not scream "Service API keys here, come and get them!". The intention here is just to make it harder for someone to get a hold of those keys.
Do you think this would be a good idea? Do you have any other alternatives?
The secure way to handle your private keys is to keep them on the server and never release them to the client.
For each approved action, create a server endpoint (e.g. an AWS Lambda). The server endpoint knows the private keys, but the app just knows where the endpoints are. This restricts the functionality to only what you approve, though the endpoints themselves can be discovered and could be used by other people without going via your app.
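A rough sketch of that proxy pattern using the JDK's built-in HTTP server (the upstream URL and environment variable are illustrative; in practice this could be a Lambda behind an API gateway):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProxyEndpoint {
    // The secret lives only here, server-side; the app only knows the path.
    private static final String SECRET_KEY = System.getenv("SERVICE_SECRET_KEY");

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/approved-action", exchange -> {
            // Call the third-party service with the secret attached server-side.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://thirdparty.example.com/api/action")) // hypothetical
                    .header("Authorization", "Bearer " + SECRET_KEY)
                    .GET()
                    .build();
            try {
                HttpResponse<byte[]> upstream = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofByteArray());
                exchange.sendResponseHeaders(upstream.statusCode(), upstream.body().length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(upstream.body());
                }
            } catch (InterruptedException e) {
                exchange.sendResponseHeaders(502, -1); // no response body
            }
        });
        server.start();
    }
}
```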
The endpoints can use some authentication, such as JWT bearer tokens (see https://www.jsonwebtoken.io/), to ensure they are only used by the application, but this requires server-side knowledge of who is registered with the app.
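A sketch of what the server-side check could look like with the jjwt library (assuming an HMAC-signed token issued when the user registers; names are illustrative):

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jws;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;
import javax.crypto.SecretKey;

public class TokenVerifier {
    // Shared HMAC secret known only to the server; must be at least 256 bits
    // and is assumed to come from configuration, not source code.
    private static final SecretKey KEY =
            Keys.hmacShaKeyFor(System.getenv("JWT_SIGNING_SECRET").getBytes());

    /** Returns the verified claims, or null if the token is invalid. */
    public static Claims verify(String bearerToken) {
        try {
            Jws<Claims> jws = Jwts.parserBuilder()
                    .setSigningKey(KEY)
                    .build()
                    .parseClaimsJws(bearerToken);
            return jws.getBody();
        } catch (JwtException e) {
            return null; // signature invalid, expired, malformed, ...
        }
    }
}
```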
Alternatively, if the private keys cannot be used for actions you do not want your application users taking, is it worth protecting them?
There are good reasons for aiming to make things hard rather than perfectly secure, for example the cost of creating all those endpoints I mentioned versus the risk of someone abusing the private keys. Unfortunately that means agreeing on a compromise, and I can't advise you on the best one.
We are building a multi-tenant, cloud-based web product where customer data is stored in a single database instance. A certain portion of customer-specific business data is highly sensitive. That sensitive data should be protected so that nobody can access it except the authorized users of the customer, neither through the application nor by accessing the database directly. Customers want to make sure that even the platform provider (us) cannot access this specific data by any means, and they want us to clearly demonstrate data security in this context. I am looking for specific guidance in the following areas:
How do I make sure the data is protected at the database level such that even the platform provider cannot access it?
Even if we encrypt the data, the concern is that anyone with the decryption key can decrypt it
What is the best way to solve this problem?
Appreciate your feedback.
"How to I make sure the data is protected at Database level such that even the platform provider cannot access the data"
-- As you are in a multi-tenanted environment, first of all you would have to make your databases single-tenant, i.e. one DB per customer. Then you need to modify the application to pick up the right database from some form of config.
For encryption, as you are on Azure, you would use Azure Key Vault with your own keys or the customer's own keys, and then configure SQL to use those keys to encrypt the data (see the Azure documentation on Transparent Data Encryption with customer-managed keys).
If you want the database to stay multi-tenanted, you would need to do the encryption at the application level. However, that would require the application to know the customers' keys, so I don't think it would be a valid solution here.
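For context, the application-level encryption itself is the easy part; a sketch using AES-GCM with a per-tenant key (where that tenantKey lives is exactly the problem discussed above):

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class TenantCrypto {
    private static final SecureRandom RANDOM = new SecureRandom();

    /** Encrypts one field with the tenant's key; the IV is prepended to the ciphertext. */
    public static byte[] encryptField(SecretKey tenantKey, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12]; // 96-bit IV, as recommended for GCM
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, tenantKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```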
"Even if we encrypt the Data, the concern is that anyone with the decryption key can decrypt the data" - yep anyone with the keys can access the data. For this you would need to set the access controls appropriately on your key vault.. so the customer can see only their keys.
In the end as you are the service provider.. the customers would have to trust you some what :)
Note: For clarification this is not the Firebase API Key, this may be more like a token...something that the client app possesses, and the server endpoint verifies.
We are trying to do even better to secure an API Key (think token that is used to validate a client to an endpoint). This will all be on our internal network, but we still want to be sure that only our mobile client can call the endpoint.
I was thinking that we could put the API Key in a Firebase remote config parameter (with an invalid default value built into the app). However, the Firebase documentation for remote config says:
Don't store confidential data in Remote Config parameter keys or parameter values. It is possible to decode any parameter keys or values stored in the Remote Config settings for your project.
I wasn't sure if this is just referring to the default values that are bundled with the app, or if it is also for values that are loaded remotely. Once we have the key, we can encrypt it and store it on the device via our MDM provider.
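For reference, the setup being described might look roughly like this on Android (the parameter name api_key is hypothetical):

```java
import com.google.firebase.remoteconfig.FirebaseRemoteConfig;
import java.util.HashMap;
import java.util.Map;

public class ConfigLoader {
    public static void loadApiKey() {
        FirebaseRemoteConfig config = FirebaseRemoteConfig.getInstance();
        // Invalid placeholder bundled with the app, as described above.
        Map<String, Object> defaults = new HashMap<>();
        defaults.put("api_key", "INVALID"); // parameter name is hypothetical
        config.setDefaultsAsync(defaults);
        config.fetchAndActivate().addOnCompleteListener(task -> {
            String apiKey = config.getString("api_key");
            // ... encrypt and persist via the MDM provider, per the plan above
        });
    }
}
```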
Also, is the transfer of the remote config data to the app encrypted, or is it done in clear text?
Thanks for any more information that anyone can provide about the remote config.
It depends on how secure you want to keep your API Key. What does the API key allow someone to do? If it's simply to identify your app to another service, for example the YouTube Data API, then the worst that can happen is that a malicious user uses up your quota for that resource. On the other hand, if the key allows the holder to make some irreversible changes to important data without further authentication and authorization, then you never want it stored on their device in any form.
Your quote from the Firebase documentation answers your question. In general, you should not be storing private keys in your app. Check out the answers to this question for thorough explanations.
Using Firebase's Remote Config is hardly more secure than shipping keys in the app bundle. Either way, the data ends up on users' hardware. A malicious person can then theoretically access it, no matter how difficult we may think that is to do.
Also, I can't say for sure (you should be able to easily test this), but I HIGHLY doubt that remote config values are sent as plain text. Google does everything over HTTPS by default.
@Frank van Puffelen can confirm this, but from my understanding Firebase Remote Config uses HTTPS rather than plain HTTP, which makes sniffing the information shared between the app and Remote Config harder than decompiling the APK and reading the string constants generated by Gradle build configurations. For instance, when you debug an app with a network proxy sniffer such as Charles Proxy, you can't view the endpoint details unless the app is compiled in debug mode, due to HTTPS and newer security measures in the latest API versions.
See What makes "https" sites more secure than "http"?.
The HTTP protocol doesn't encrypt data in transit, so your personal information can be intercepted or even manipulated by third parties. To capture network information (passwords, credit card numbers, user IDs, etc.), hackers use a method called "sniffing". If network packets aren't encrypted, the data within them can be read and stolen with the help of a sniffing application.
HTTPS, by contrast, keeps any kind of data, including passwords, text messages, and credit card details, safe in transit between your computer and the servers. HTTPS keeps your data confidential by using the TLS protocol, frequently referred to as SSL, which offers three layers of protection: encryption, data integrity, and authentication. SSL certificates use what is known as asymmetric public-key cryptography, or a Public Key Infrastructure (PKI) system. A PKI system uses two different keys to encrypt communications: a public key and a private key. Anything encrypted with the public key can only be decrypted by the corresponding private key, and vice versa. HTTPS also protects you from attacks such as man-in-the-middle attacks, DNS rebinding, and replay attacks.
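The asymmetric part is easy to demonstrate (a toy sketch; real TLS combines this with certificates and symmetric session keys):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class PkiDemo {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // Encrypt with the public key...
        Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ciphertext = enc.doFinal("hello".getBytes(StandardCharsets.UTF_8));

        // ...and only the matching private key can decrypt it.
        Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```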
Further Security Measures
DexGuard offers string encryption, according to their landing page. I've sent them a message and am waiting to hear how much this would cost for an indie developer.
Using a public/private key exchange may be an additional layer of security. However, I need to research the implementation further to better understand this option.
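For what it's worth, one way such an exchange could work is request signing: the app signs each request with its private key and the endpoint verifies it with the matching public key. A sketch (key distribution and replay protection, the hard parts, are not shown):

```java
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class RequestSigner {
    /** Client side: sign the request body with the app's private key. */
    public static byte[] sign(PrivateKey privateKey, String body) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(privateKey);
        signer.update(body.getBytes(StandardCharsets.UTF_8));
        return signer.sign();
    }

    /** Server side: verify the signature with the corresponding public key. */
    public static boolean verify(PublicKey publicKey, String body, byte[] signature)
            throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publicKey);
        verifier.update(body.getBytes(StandardCharsets.UTF_8));
        return verifier.verify(signature);
    }
}
```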
I want to use EchoSign as a third-party service to sign the contracts that my app generates using iText.
My app creates contracts; the way it works is the following:
The app creates the contract using iText
The app sends the contract to the approver
The approver logs in to the app and signs the PDF by pressing an approval button.
The app creates the PDF again, now including the approval.
The PDF is stored in the database.
We want to implement EchoSign to manage the approvals. So far I know that EchoSign provides an API to work with, and I think it is possible to integrate it into my app.
I have read a lot about EchoSign, and it seems that all the PDFs are stored and managed on EchoSign's servers. We don't want that.
The question is: does the app need to rely on the availability of EchoSign's servers to send and receive information about the documents it creates?
Thanks in advance.
Yes, the app needs to rely on EchoSign servers because the PDF is signed using a private key owned by Adobe EchoSign. This private key is stored on a Hardware Security Module (HSM) on Adobe's side and is never transferred to the client (for obvious reasons).
You also depend on EchoSign's servers because that's where the user management is done: EchoSign needs an audit trail to identify each user (credentials, IP address, login time, ...).
If you don't want to depend on an external server, you have two options:
each user owns a token or a smart card and uses it to sign (for instance, in Belgium every citizen owns an eID, an identity card with a chip that contains a couple of private keys)
you have a server with an HSM, you manage your users on that server, and you sign with the private key on the HSM.
Read more about this here: http://itextpdf.com/book/digitalsignatures
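For completeness, a rough sketch of the second option (server-side signing) with iText 5, following the API described in the book linked above. The key, certificate chain, and HSM integration are assumed to come from elsewhere, and the BouncyCastle provider must be registered:

```java
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfSignatureAppearance;
import com.itextpdf.text.pdf.PdfStamper;
import com.itextpdf.text.pdf.security.BouncyCastleDigest;
import com.itextpdf.text.pdf.security.ExternalDigest;
import com.itextpdf.text.pdf.security.ExternalSignature;
import com.itextpdf.text.pdf.security.MakeSignature;
import com.itextpdf.text.pdf.security.PrivateKeySignature;
import java.io.FileOutputStream;
import java.security.PrivateKey;
import java.security.cert.Certificate;

public class ServerSideSigner {
    /** pk and chain are assumed to come from the server's keystore or HSM. */
    public static void sign(String src, String dest, PrivateKey pk, Certificate[] chain)
            throws Exception {
        PdfReader reader = new PdfReader(src);
        PdfStamper stamper = PdfStamper.createSignature(
                reader, new FileOutputStream(dest), '\0');
        PdfSignatureAppearance appearance = stamper.getSignatureAppearance();
        ExternalDigest digest = new BouncyCastleDigest();
        ExternalSignature signature = new PrivateKeySignature(pk, "SHA-256", "BC");
        // Produces a detached CMS signature embedded in the PDF.
        MakeSignature.signDetached(appearance, digest, signature, chain,
                null, null, null, 0, MakeSignature.CryptoStandard.CMS);
    }
}
```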