How does Adobe EchoSign work?

I want to use EchoSign as third-party software to sign the contracts that my app generates using iText.
My app creates contracts; the workflow is the following:
The app creates the contract using iText
The app sends the contract to the approver
The approver logs in to the app and signs the PDF by pressing an approval button.
The PDF is created again by the app but now including the approval.
The PDF is stored in the database.
We want to implement EchoSign to manage the approvals. So far I know that EchoSign provides an API to work with, and I think it is possible to implement this in my app.
I have read a lot about EchoSign, and it seems that all the PDFs are stored and managed by EchoSign servers. We don't want that.
The question is: does the app need to rely on the availability of EchoSign servers to send and receive information about the documents created by the application?
Thanks in advance.

Yes, the app needs to rely on EchoSign servers because the PDF is signed using a private key owned by Adobe EchoSign. This private key is stored on a Hardware Security Module (HSM) on Adobe's side and is never transferred to the client (for obvious reasons).
You also depend on EchoSign servers because that's where the user management is done: EchoSign needs an audit trail to identify each user: credentials, IP address, login time, and so on.
If you don't want to depend on an external server, you have two options:
each user owns a token or a smart card and uses that token or smart card to sign (for instance: in Belgium, every citizen owns an eID, which is an identity card with a chip that contains a couple of private keys)
you have a server with an HSM, manage your users on that server, and sign with the private key on the HSM (a rough signing sketch follows below).
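As a very rough illustration of the second option (not EchoSign-specific, and assuming for the sake of the example that the private key is available as a PEM file rather than locked inside a real HSM), a server could sign and verify a contract's bytes with Node's crypto module like this:

    // Minimal sketch: server-side signing of a PDF's bytes with a server-held private key.
    // With a real HSM the key never leaves the device; the PEM files here are placeholders.
    import { createSign, createVerify } from "crypto";
    import { readFileSync } from "fs";

    const privateKeyPem = readFileSync("server-private-key.pem", "utf8"); // placeholder path
    const publicKeyPem = readFileSync("server-public-key.pem", "utf8");   // placeholder path

    export function signPdf(pdfBytes: Buffer): Buffer {
      const signer = createSign("RSA-SHA256");
      signer.update(pdfBytes);
      return signer.sign(privateKeyPem); // detached signature over the PDF bytes
    }

    export function verifyPdf(pdfBytes: Buffer, signature: Buffer): boolean {
      const verifier = createVerify("RSA-SHA256");
      verifier.update(pdfBytes);
      return verifier.verify(publicKeyPem, signature);
    }

Note that a proper PDF signature (what iText produces) is embedded inside the document itself; the sketch above only shows the key-handling idea of keeping the private key on the server.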
Read more about this here: http://itextpdf.com/book/digitalsignatures

Related

Secure Firebase REST APIs at a client level (not end user level)

I am trying to create a REST API (via Firebase Cloud Functions) and release it to my clients to allow them to create their mobile apps. The mobile apps they will be creating are used by public users. However, users are not supposed to deal with our APIs and thus with authentication, so I don't need end-user authentication. It's up to our clients (app makers) to use a "client id" and an "api key".
Based on what I have researched, Firebase Admin SDK might not be a good solution for this end since we're concerned about client level authentication.
I am looking for a standard solution to generate API keys for the third-party clients. This key generation is not a manual process but rather a service that clients will use to obtain a key, something like the Google Maps API for third-party developers. We want to keep track of whitelisted clients without needing their app users to deal with authentication.
I'd appreciate suggestions and guidelines to find the best solution for our REST APIs.
The first solution that comes to my mind is the new Firebase App Check. It would restrict any access besides the apps and web pages you have whitelisted for your project. I don't know if that is possible in your use case (it depends on how the cooperation with the other apps looks), but I would definitely try this first.
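For what it's worth, here is a minimal sketch of enforcing App Check in a callable Cloud Function (firebase-functions v1 style); whether App Check fits a model where third-party developers build the apps is the open question:

    import * as functions from "firebase-functions";

    // Minimal sketch: reject calls that do not carry a valid App Check token.
    export const getData = functions.https.onCall((data, context) => {
      if (context.app === undefined) {
        throw new functions.https.HttpsError(
          "failed-precondition",
          "The function must be called from an App Check verified app."
        );
      }
      return { message: "Hello from a verified app" };
    });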

Is it safe to use private key for decoding a server message on client side?

Usually, this story goes like "client encrypts using the public key, server decrypts using the very-safely-stored private key".
Well, I have the opposite issue.
In a mobile app, I am using a web service library whose API requires 2 secret keys from my personal account for that service. Anyone with access to these 2 secret keys can basically use them to call the same service's APIs as if the calls came from my app. So I definitely do not want to embed those keys in the app, as decompilation might easily expose them.
So I thought I'd store those keys server-side, and send them encrypted with a public key to the app. Then, the app would decrypt them using the private key stored in the app itself.
I know it's still not secure, but at least, a simple man-in-the-middle attack or a binary decompilation analysis will not scream "Service API keys here, come and get them!". The intention here is just to make it harder for someone to get a hold of those keys.
Do you think this would be a good idea? Do you have any other alternatives?
The secure way to handle your private keys is to keep them on the server and never release them to the client.
For each approved action create a server endpoint (e.g. an AWS Lambda). The server endpoint knows the private keys, but the app just knows where the endpoints are. This restricts the functionality to only what you approve, but the endpoints themselves can be discovered and could be used by other people without going via your app.
The endpoints can use some authentication such as JWT Bearer tokens (see https://www.jsonwebtoken.io/ ) to ensure they are only used by the application, but this requires server-side knowledge of who is registered with the app (a rough sketch follows below).
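As a rough sketch of that idea (assuming an Express server and the jsonwebtoken npm package; the route name and the downstream call are made up for illustration):

    import express from "express";
    import jwt from "jsonwebtoken";

    const app = express();
    const JWT_SECRET = process.env.JWT_SECRET!;        // shared with whatever issues the tokens
    const SERVICE_API_KEY = process.env.SERVICE_KEY!;  // the secret that never ships in the app

    // Hypothetical proxy endpoint: the mobile app calls this instead of the third-party service.
    app.post("/proxy/do-action", express.json(), (req, res) => {
      const auth = req.headers.authorization ?? "";
      const token = auth.startsWith("Bearer ") ? auth.slice(7) : "";
      try {
        jwt.verify(token, JWT_SECRET);   // reject callers without a valid token
      } catch {
        return res.status(401).json({ error: "invalid or missing token" });
      }

      // Call the third-party service here using SERVICE_API_KEY and relay the result.
      // (The actual call is omitted; it depends on the library in question.)
      return res.json({ ok: true });
    });

    app.listen(3000);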
Alternatively, if the private keys cannot be used for actions you do not want your application users taking, is it worth protecting them?
There are good reasons for aiming to make things hard rather than fully secure, for example the cost of creating all those endpoints I mentioned versus the risk of someone abusing the private keys. Unfortunately, that means someone agreeing to a compromise, and I can't advise you on the best compromise.

Asking the user for settings when enabling an Alexa Skill

I am working on an Alexa skill which uses an external web service that requires an API key.
For the life of me, I can't find where to add this property so that, when the user enables the Alexa skill (I haven't got as far as publishing yet, but I assume there is a property I can set somewhere for testing as well), they can add their API key; I would then receive it in my Node.js Lambda function, extract it, and use it for my POST request to the web service.
I know there is the Amazon account linking service, but I believe the web service I am using doesn't support that type of login; its API is only accessed by sending a header containing the API key. Therefore I need a way for the user to store their API key somewhere so that I can send it to the web service from the Lambda code.
I'm not clear on how you expect the user to 'add their API key'.
The only built-in UI is the cards that your skill can push to a user but these are very limited and can't request information from the user.
Amazon does not show the user any sort of configurable settings for the skills.
And you have noted account-linking and that it does not address your needs.
So you could either ask the user to say the API key, which would be much too error-prone unless it is unusually short, or you will need to direct the user (probably via a card) to your own website where they can provide their API key.
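If you go the card route, a minimal sketch with the ask-sdk in a Node.js Lambda might look like the following (the intent name, card text, and URL are placeholders; you still need your own website to actually collect and store the key):

    import * as Alexa from "ask-sdk-core";

    // Minimal sketch: send the user a card pointing to an external page for entering the API key.
    const LinkApiKeyIntentHandler: Alexa.RequestHandler = {
      canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest"
          && Alexa.getIntentName(handlerInput.requestEnvelope) === "LinkApiKeyIntent"; // hypothetical intent
      },
      handle(handlerInput) {
        return handlerInput.responseBuilder
          .speak("I have sent a card to your Alexa app with instructions for adding your API key.")
          .withSimpleCard(
            "Add your API key",
            "Visit https://example.com/link and follow the instructions there." // placeholder URL
          )
          .getResponse();
      },
    };

    export const handler = Alexa.SkillBuilders.custom()
      .addRequestHandlers(LinkApiKeyIntentHandler)
      .lambda();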

Is it OK to use Firebase RemoteConfig to store API Keys?

Note: for clarification, this is not the Firebase API key; it may be more like a token... something that the client app possesses and the server endpoint verifies.
We are trying to do even better to secure an API Key (think token that is used to validate a client to an endpoint). This will all be on our internal network, but we still want to be sure that only our mobile client can call the endpoint.
I was thinking that we could put the API Key in a Firebase remote config parameter (with an invalid default value built into the app). However, the Firebase documentation for remote config says:
Don't store confidential data in Remote Config parameter keys or parameter values. It is possible to decode any parameter keys or values stored in the Remote Config settings for your project.
I wasn't sure if this is just referring to the default values that are bundled with the app, or if it is also for values that are loaded remotely. Once we have the key, we can encrypt it and store it on the device via our MDM provider.
Also, is the transfer of the Remote Config data to the app encrypted, or is it done in clear text?
Thanks for any more information that anyone can provide about the remote config.
It depends on how secure you want to keep your API Key. What does the API key allow someone to do? If it's simply to identify your app to another service, for example the YouTube Data API, then the worst that can happen is that a malicious user uses up your quota for that resource. On the other hand, if the key allows the holder to make some irreversible changes to important data without further authentication and authorization, then you never want it stored on their device in any form.
Your quote from the Firebase documentation answers your question. In general, you should not be storing private keys in your app. Check out the answers to this question for thorough explanations.
Using Firebase's Remote Config is hardly more secure than shipping keys in the app bundle. Either way, the data ends up on users' hardware. A malicious person can then theoretically access it, no matter how difficult we may think that is to do.
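To make that concrete, fetching such a value with the Firebase JS SDK looks roughly like this (the parameter name is made up); anything the client can read this way can in principle also be read by whoever controls the device:

    import { initializeApp } from "firebase/app";
    import { getRemoteConfig, fetchAndActivate, getValue } from "firebase/remote-config";

    // Minimal sketch: any Remote Config value the app can fetch ends up on the user's
    // device, so it cannot be treated as a secret.
    const app = initializeApp({ /* your Firebase config */ });
    const remoteConfig = getRemoteConfig(app);
    remoteConfig.defaultConfig = { api_key: "invalid-default" }; // hypothetical parameter

    async function readApiKey(): Promise<string> {
      await fetchAndActivate(remoteConfig);
      return getValue(remoteConfig, "api_key").asString();
    }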
Also, I can't say for sure (you should be able to easily test this) but I HIGHLY doubt that remote config values are sent as plain text. Google does everything over https by default.
@Frank van Puffelen can confirm this, but from my understanding Firebase Remote Config uses HTTPS rather than plain HTTP requests, which makes it harder to sniff information shared between the app and Firebase Remote Config than it is to decompile the APK and read the string constants generated by Gradle build configurations. For instance, when you debug an app with a network proxy sniffer such as Charles Proxy, you can't view the endpoint details unless the app is compiled in debug mode, due to HTTPS requests and newer security measures in the latest API versions.
See What makes "https" sites more secure than "http"?.
The HTTP protocol doesn't use data encryption when transferring data, so your personal information can be intercepted or even manipulated by third parties. To capture network information (passwords, credit card numbers, user IDs, etc.) hackers use a method called "sniffing". If network packets aren't encrypted, the data within them can be read and stolen with the help of a hacker application.
Alternatively, HTTPS keeps any kind of data, including passwords, text messages, and credit card details, safe during transit between your computer and the servers. HTTPS keeps your data confidential by using the TLS protocol, frequently referred to as SSL, a secure certificate which offers three layers of protection: encryption, data integrity, and authentication. SSL certificates use what is known as asymmetric public key cryptography, or a Public Key Infrastructure (PKI) system. A PKI system uses two different keys to encrypt communications: a public key and a private key. Anything that is encrypted with the public key can only be decrypted by the corresponding private key, and vice versa. Also, HTTPS can protect you from hacker attacks such as man-in-the-middle attacks, DNS rebinding, and replay attacks.
Further Security Measures
DexGuard offers string encryption according to their landing page. I've sent them a message and am waiting to hear how much this would cost for an indie developer.
Using a public/private API key exchange may be an additional layer of security. However, I need to research the implementation further to better understand this option.

How to ensure HTTP upload came from authentic executable

We are in the process of writing a native Windows app (MFC) that will be uploading some data to our web app. The Windows app will allow the user to log in, and after that it will periodically upload some data to our web app. The upload will be done via a simple HTTP POST to our web app. The concern I'm having is how we can ensure that the upload actually came from our app, and not from curl or something like that. I guess we're looking at some kind of public/private key encryption here. But I'm not sure if we can somehow just embed a public key in our Windows app executable and be done with it. Or would that public key be too easy to extract and use outside of our app?
Anyway, we're building both sides (client and server), so pretty much anything is an option, but it has to work through HTTP(S). However, we do not control the execution environment of the Windows (client) app, and the user running the app on his/her system is the only one who stands to gain something by gaming the system.
Ultimately, it's not possible to prove the identity of an application this way when it's running on a machine you don't own. You could embed keys, play with hashes and checksums, but at the end of the day, anything that relies on code running on somebody else's machine can be faked. Keys can be extracted, code can be reverse-engineered- it's all security through obscurity.
Spend your time working on validation and data cleanup, and if you really want to secure something, secure the end-user with a client certificate. Anything else is just a waste of time and a false sense of security.
About the best you could do would be to use HTTPS with client certificates, presumably through WinHTTP's interface.
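Since both sides are being built here, a rough sketch of the server half of that idea in Node/TypeScript (client certificates issued by your own CA; the file paths and port are placeholders, and the MFC client would present its certificate via WinHTTP):

    import https from "https";
    import { readFileSync } from "fs";

    // Minimal sketch: a TLS server that only accepts clients presenting a certificate
    // signed by our own CA. All file paths are placeholders.
    const server = https.createServer(
      {
        key: readFileSync("server-key.pem"),
        cert: readFileSync("server-cert.pem"),
        ca: readFileSync("client-ca.pem"),   // the CA that issued the client certificates
        requestCert: true,                   // ask the client for a certificate
        rejectUnauthorized: true,            // drop connections without a valid one
      },
      (_req, res) => {
        // By the time we get here, the TLS layer has already verified the client certificate.
        res.writeHead(200);
        res.end("upload accepted\n");
      }
    );

    server.listen(8443);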
But I'm not sure if we can somehow just embed a public key in our win app executable and be done with it.
If the client is to be identifying itself to the server, it would have to be the private key embedded.
Or would that be too easy to extract and use outside of our app?
If you don't control the client app's execution environment, anything your app can do can be analysed, automated and reproduced by an attacker that does control that environment.
You can put obfuscatory layers around the communications procedure if you must, but you'll never fix the problem. Multiplayer games have been trying to do this for years to combat cheating, but in the end it's just an obfuscation arms race that can never be won. Blizzard have way more resources than you, and they can't manage it either.
You have no control over the binaries once your app is distributed. If all the signing and encryption logic reside in your executable it can be extracted. Clever coders will figure out the code and build interoperable systems when there's enough motivation to do so. That's why DRM doesn't work.
A complex system tying a key to the MAC address of a PC for instance is sure to fail.
Don't trust a particular executable or system but trust your users. Entrust each of them with a private key file protected by a passphrase and explain to them how that key identify them as submitters of contents on your service.
Since you're controlling the client, you might as well embed the key in the application and make sure the users don't have read access to the application image. You'll need to separate the logic into two tiers: one that the user runs, and another that connects to the service over HTTP(S), since the user will always have read access to an application he's running.
If I understand correctly, the data is sent automatically after the user logs on - this sounds like only the service part is needed.
