App Store Connect Export Compliance for StoreKit 1 and 2 - encryption

My app uses both StoreKit 1 and StoreKit 2: TPInAppReceipt (which uses ASN.1) to parse and verify the App Store receipt, and StoreKit 2's JWS signatures to verify purchases. Since I would like to make my app available in France, do I have to fill out the ANSSI form, given that encryption is only used for local verification? Judging by the contents of the form, it seems to apply to encrypted transmission between devices.
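For context on the "local verification" point: a JWS is a signed token, not an encrypted payload, and verifying it is a purely local signature check over base64url-encoded data. The toy Python sketch below uses HS256 for brevity (StoreKit 2 actually uses ES256 with Apple's certificate chain, so the algorithm and key here are illustrative assumptions, not Apple's scheme):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWS uses base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jws(payload: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jws(token: str, key: bytes) -> bool:
    # Verification recomputes the signature locally; nothing is decrypted
    # and nothing needs to be sent anywhere.
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

key = b"local-verification-demo"
token = sign_jws({"productId": "com.example.pro"}, key)
assert verify_jws(token, key)
tampered = token[:-1] + ("A" if token[-1] != "A" else "B")
assert not verify_jws(tampered, key)
```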

Related

Unattended/automated Linux device key management (certs for accessing update servers)

I am currently working on a customized media center/box product for my employer. It's basically a Raspberry Pi 3B+ running Raspbian, configured to auto-update periodically via apt. The device accesses binaries for proprietary applications from a private, secured apt repo, using a pre-installed certificate on the device.
Right now, the certificate on the device is set to never expire, but going forward, we'd like to configure the certificate to expire every 4 months. We also plan to deploy a unique certificate per device we ship, so the certs can be revoked (i.e. in case the customer reports the device as stolen).
Is there a way, via apt or OpenStack/Barbican KMS to:
Update the certs for apt-repo access on the device periodically.
Setup key-encryption-keys (KEK) on the device, if we need the device to be able to download sensitive data, such as an in-memory cached copy of customer info.
Provide a mechanism for a new key to be deployed on the device if the currently-used key has expired (i.e. the user hasn't connected the device to the internet for more than 4 months). Trying to wrap my head around this one, since the device now (in this state) has an expired certificate, and I can't determine how to let it be trusted to pull a new one.
Allow keys to be tracked, revoked, and de-commissioned.
Thank you.
Is there a way I could use Barbican to:
* Update the certs for apt-repo access on the device periodically.
Barbican used to have an interface to issue certs, but this was removed, so Barbican is now simply a service to generate and store secrets.
You could use something like certmonger instead. certmonger is a client-side daemon that generates cert requests and submits them to a CA. It then tracks those certs and requests new ones when they are about to expire.
Setup key-encryption-keys (KEK) on the device, if we need the device to be able to download sensitive data, such as an in-memory cached copy of customer info.
To use Barbican, you need to be able to authenticate and retrieve something like a Keystone token. Once you have that, you can use Barbican to generate key-encryption keys (which would be stored in the Barbican database) and download them to the device using the secret-retrieval API.
Do you need/want the KEKs escrowed like this, though?
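The KEK pattern itself is independent of Barbican: the KEK stays on the device (or in escrow), and only wrapped data-encryption keys travel with the data. A toy Python sketch of the idea, using a one-time XOR wrap purely for brevity (a real deployment would use AES key wrap, e.g. RFC 3394, with the KEK retrieved from a secret store):

```python
import secrets

def wrap_key(dek: bytes, kek: bytes) -> bytes:
    # XOR wrap: only acceptable because the KEK is random, the same
    # length as the DEK, and used once; real systems use AES-KW (RFC 3394).
    assert len(dek) == len(kek)
    return bytes(a ^ b for a, b in zip(dek, kek))

unwrap_key = wrap_key  # XOR is its own inverse

kek = secrets.token_bytes(32)   # key-encryption key, kept on the device
dek = secrets.token_bytes(32)   # data-encryption key for the cached data

wrapped = wrap_key(dek, kek)    # safe to store alongside the ciphertext
assert unwrap_key(wrapped, kek) == dek
```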
Provide a mechanism for a new key to be deployed on the device if the currently-used key has expired (i.e. the user hasn't connected the device to the internet for more than 4 months).
Barbican has no mechanism for this. This is client-side tooling that would need to be written, and you'd need to think about authentication.
Allow keys to be tracked, revoked, and de-commissioned.
Same as above. Barbican has no mechanism for this.

Google Cloud IoT Core device registration: can multiple devices use the same public/private key?

I am working on an in-house project with several sensor devices. I do not want a user to register each and every device individually; I want to use the same public/private key pair for all devices registering to a registry, but still be able to pass unique device information (like a name or id) on to Pub/Sub via MQTT/HTTP. Is it possible to achieve that?
I am assuming that if I use the same keys, I am registering all devices as one. But is it possible to send device info as part of the message being published? Does doing that inhibit the use of Google's built-in functionality in any way, such as the APIs?
I'm new to cloud technologies, so any thoughts/suggestions would help.
It depends on the MQTT broker configuration.
Normally, certificate-based authentication is used only on the MQTT broker side. So you can use the public/private key pair to authenticate and connect to the broker, and use the MQTT ClientID to differentiate between your devices.
The MQTT broker can also be configured to use the identity from the authentication certificate as the username (in Mosquitto, for example):
use_identity_as_username true
In this case, if the MQTT broker also has a username-based ACL configuration, for example:
#device info sent from device. %u <- username
pattern readwrite %u/devinfo
then all your devices will publish messages under the same username, so you should set a different ClientID for each device, or use the CleanSession flag, in this case.
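To make the %u substitution concrete: the broker resolves the ACL pattern against each connection's identity before checking topic access. A small Python sketch of that resolution (Mosquitto also supports %c for the ClientID; the device names here are hypothetical):

```python
def expand_acl_pattern(pattern: str, username: str, client_id: str) -> str:
    # Mosquitto substitutes %u (username) and %c (ClientID) in ACL patterns.
    return pattern.replace("%u", username).replace("%c", client_id)

# With use_identity_as_username, the cert identity becomes the username,
# so devices sharing one cert all resolve to the same topic:
assert expand_acl_pattern("%u/devinfo", "shared-cert-cn", "sensor-01") == "shared-cert-cn/devinfo"
assert expand_acl_pattern("%u/devinfo", "shared-cert-cn", "sensor-02") == "shared-cert-cn/devinfo"
# A ClientID-based pattern would keep the devices apart:
assert expand_acl_pattern("%c/devinfo", "shared-cert-cn", "sensor-01") == "sensor-01/devinfo"
```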
Here is a good read to understand how the connection between a device and a broker works: https://www.hivemq.com/blog/mqtt-essentials-part-3-client-broker-connection-establishment/
Sounds like you really want to be using the new gateway functionality (it's in beta now, but I've run through using it a bunch and it's quite stable).
Check out this tutorial on gateways to get an idea of what we're talking about:
https://cloud.google.com/community/tutorials/cloud-iot-gateways-rpi
TL;DR version is that it allows a single device to manage many smaller devices (which may not be capable of auth on their own) but still have those smaller devices be represented in the Cloud.
So basically, you have a more powerful device (like a Raspberry Pi, or a desktop machine, whatever) registered in IoT Core as a "Gateway". When you create the individual devices in the Cloud, you don't specify an SSL key (the Console will warn you about the device not being able to connect), and you "associate" each device with the gateway, which handles the auth piece for you. The individual devices then connect and talk to the gateway locally, instead of calling out to the internet.
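If you stay with the shared-credential approach instead, the asker's idea of carrying device identity in the message body is straightforward: publish a JSON payload with a unique device id alongside the reading, and let the Pub/Sub consumer key on that field. A minimal sketch (the field names and the paho-style publish call in the comment are illustrative assumptions, not IoT Core requirements):

```python
import json, time

def build_payload(device_id: str, reading: float) -> str:
    # Device identity travels in the payload, since the auth identity is shared.
    return json.dumps({
        "device_id": device_id,
        "reading": reading,
        "ts": int(time.time()),
    })

# e.g. client.publish("/devices/shared/events", build_payload("sensor-01", 21.5))
msg = build_payload("sensor-01", 21.5)
decoded = json.loads(msg)
assert decoded["device_id"] == "sensor-01"
assert decoded["reading"] == 21.5
```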

Is it a good idea to store sensitive info in Firebase?

In my Android application I have an idea to store some serial keys in a database. If the user enters a correct key, they get the full version of the application, and the key is disabled on the server to avoid multiple uses of the same key; otherwise, they can buy the app on Google Play without a key.
For this I thought of using the Firebase Database, but after reading this I have some doubts:
Firebase Realtime Database
Store and sync data with our NoSQL cloud database. Data is synced across all clients in realtime, and remains available when your app goes offline.
Does it mean that Firebase will duplicate the table with all available keys to all application users, so that some smart user can read the list from the copy on his phone?
Not all data is automatically duplicated to all clients. Only data that the client subscribes to is received by that client.
You can control what data each client can see through Firebase's server-side security rules. For example, you'll typically want to ensure that each user can only read their own data.
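As a sketch of what such rules look like (the users path is a hypothetical layout, not something from the question), Realtime Database security rules that restrict each signed-in user to their own subtree would be roughly:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```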
It probably isn't a good idea to store super-sensitive data like social security numbers or credit card numbers, but if you look at https://firebase.google.com/docs/database/security/ you can see that you can control access to data and use validation. Especially since you can regenerate the keys if they become compromised, it wouldn't be the worst option. And https://firebase.google.com/docs/database/security/user-security shows that it's possible to write an app that uses it like Google Drive with a smartphone-based client.
Personally, my answer would be no. You may want to look at Google Play Subscriptions and In-App Purchases instead.
If you really have to then:
Create a key as a user buys the upgrade (server-side).
Store the device id/account id (hashed) and timestamp with the key.
Credit card details and expiry dates should be combined into one hash.
Just encrypt everything.
It's better to have a banned list than a list of approved keys. Eventually you will have to create more keys, and it's easier just to maintain a list of banned keys.
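A minimal server-side sketch of points 1, 2, and the banned-list idea (all names are illustrative, and this is plain Python rather than Firebase-specific code): issue a key at purchase time, store only a hash of the device/account ids next to it, and check redemptions against a banned set:

```python
import hashlib, secrets, time

keys = {}          # serial key -> record (your server-side store)
banned = set()     # revoked serial keys

def issue_key(device_id: str, account_id: str) -> str:
    key = secrets.token_urlsafe(16)
    keys[key] = {
        # store identifiers hashed, not in the clear
        "owner": hashlib.sha256(f"{device_id}:{account_id}".encode()).hexdigest(),
        "issued_at": int(time.time()),
        "redeemed": False,
    }
    return key

def redeem(key: str) -> bool:
    rec = keys.get(key)
    if rec is None or key in banned or rec["redeemed"]:
        return False
    rec["redeemed"] = True   # disable further use of the same key
    return True

k = issue_key("device-123", "account-456")
assert redeem(k)        # first redemption succeeds
assert not redeem(k)    # the same key can't be used twice
k2 = issue_key("device-789", "account-000")
banned.add(k2)
assert not redeem(k2)   # banned keys are rejected
```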

Meteor JS - sharing Collections among distinct clients (like Admin portal vs Consumer portal)

A few similar (but not the same) questions were asked two years ago but were not fully answered:
(Segmented Meteor App(s) - loading only half the client or two apps sharing a database;
Meteor: Different collections, different databases)
Since Meteor has changed quite a bit, was wondering if there was a better way of doing the following (i know about roles, publish subscribe etc):
Simple example: Say I have a restaurant ordering app with 2 portals:
(1) A consumer side, with accounts, and a form to place your food order and pay for it with a credit card (assume the number is stored; not using Stripe etc.).
(2) An admin side, with accounts, for the restaurant to manage incoming orders, track payments, and see credit card numbers.
Assuming more complexity + very high security requirements, would this be structured as 1 monolithic meteor app? Or is there a standard way to break it into 2 (like traditional MVC frameworks, you might have 3 DBs - 1 Consumer DB, 1 Admin DB, 1 DB for shared sensitive data like credit card numbers - and 2 SPA clients). Breaking it into 2 would be preferable for the following reasons:
(1) different account types for the 2 portals - e.g. admins require 2FA. I also actually prefer to have separate DBs for security & backup precautions.
(2) useful for code management/distribution purposes
(3) also so we don't have to send all the Admin templates to the Consumer.
I think you would be able to use two Meteor apps accessing the same Mongo database; each app can be pointed at the shared database via the MONGO_URL environment variable.
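A sketch of how the two apps would share one database: Meteor reads MONGO_URL at startup, so the consumer and admin portals can run as separate processes against the same MongoDB (the database name and ports below are illustrative):

```shell
# Consumer portal
MONGO_URL=mongodb://localhost:27017/restaurant meteor --port 3000
# Admin portal: separate app and port, same database
MONGO_URL=mongodb://localhost:27017/restaurant meteor --port 4000
```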

How does Adobe EchoSign work?

I want to use EchoSign as a third party software to sign the contracts that my app generates using itext.
My app creates contracts, the way it works is the following:
The app creates the contract using iText
The app sends the contract to the approver
The approver logs in to the app and signs the PDF by pressing an approval button.
The PDF is created again by the app but now including the approval.
The PDF is stored in the database.
We want to implement EchoSign to manage the approvals. So far I know that EchoSign provides an API to work with and I think that is possible to implement this in my app.
I have read a lot about EchoSign, and it seems that all the PDFs are stored and managed on EchoSign's servers. We don't want that.
The question is: does the app need to rely on the availability of EchoSign's servers to send and receive information about the documents created by the application?
Thanks in advance.
Yes, the app needs to rely on EchoSign servers because the PDF is signed using a private key owned by Adobe EchoSign. This private key is stored on a Hardware Security Module (HSM) on Adobe's side and is never transferred to the client (for obvious reasons).
You also depend on EchoSign servers because that's where the user management is done. EchoSign needs a trail to identify each user: credentials, IP address, login time, and so on.
If you don't want to depend on an external server, you have two options:
each user owns a token or a smart card and uses that token or smart card to sign (for instance: in Belgium, every citizen owns an eID, which is an identity card with a chip that contains a couple of private keys)
you have a server with an HSM, you manage your users on that server, and you sign with the private key on the HSM.
Read more about this here: http://itextpdf.com/book/digitalsignatures
