Google Cloud IoT Core device registration: can multiple devices use the same public/private key?

I am working on an in-house project with several sensor devices. I do not want a user to register each and every device individually; instead, I want to use the same public/private key pair for all devices registering to a registry, but still be able to pass unique device information (like a name or ID) on to Pub/Sub via MQTT/HTTP. Is it possible to achieve that?
I am assuming that if I use the same keys, I am effectively registering all devices as one. Is it possible to send device info as part of the published message, and does doing that inhibit the use of Google's built-in functionality, such as the APIs, in any way?
I am new to cloud technologies, so any thoughts/suggestions would help.

This depends on the MQTT broker configuration.
Normally, certificate-based authentication is used only on the MQTT broker side, so you can use the same public/private key pair to authenticate and connect to the broker, and use the MQTT client ID to distinguish between your devices.
The MQTT broker can also be configured to use the identity from the client certificate as the username; in Mosquitto, for example:
use_identity_as_username true
In this case, if the MQTT broker also has a username-based ACL configuration, for example like this:
#device info sent from device. %u <- username
pattern readwrite %u/devinfo
All your devices will publish messages under the same username; in this case you should set a different client ID for each device, or use the clean session flag.
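A minimal sketch of the device side with the Python paho-mqtt client, assuming Mosquitto is configured as above; the broker hostname, certificate paths, and certificate identity are placeholders. All devices share the key pair, but each connects with its own client ID and carries its identity in the payload:

import json
import ssl

import paho.mqtt.client as mqtt

DEVICE_ID = "sensor-042"          # unique per device
CERT_IDENTITY = "shared-sensor"   # CN of the shared certificate = username

client = mqtt.Client(client_id=DEVICE_ID, clean_session=True)
client.tls_set(
    ca_certs="/etc/certs/ca.crt",      # CA that signed the broker cert
    certfile="/etc/certs/device.crt",  # shared client certificate
    keyfile="/etc/certs/device.key",   # shared private key
    tls_version=ssl.PROTOCOL_TLSv1_2,
)
client.connect("broker.example.com", 8883)

# With use_identity_as_username, the ACL above resolves %u to the
# certificate identity, so all devices publish under the same topic
# and carry their own ID in the message body.
client.publish(
    CERT_IDENTITY + "/devinfo",
    json.dumps({"device_id": DEVICE_ID, "temp_c": 21.5}),
    qos=1,
)
client.loop(timeout=2.0)
client.disconnect()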
Here is a good read for understanding how the connection between a device and a broker works: https://www.hivemq.com/blog/mqtt-essentials-part-3-client-broker-connection-establishment/

It sounds like you really want to be using the new gateway functionality (it's in beta now, but I've run through it a bunch and it's quite stable).
Check out this tutorial on gateways to get an idea of what we're talking about:
https://cloud.google.com/community/tutorials/cloud-iot-gateways-rpi
The TL;DR version is that it allows a single device to manage many smaller devices (which may not be capable of auth on their own) while still having those smaller devices represented in the Cloud.
So basically, you have a more powerful device (like a Raspberry Pi, a desktop machine, whatever) registered in IoT Core as a "gateway". When you create the individual devices in the Cloud, you don't specify an SSL key (the Console will warn you that the device won't be able to connect), and you then "associate" each device with the gateway, which handles the auth piece for you. The individual devices, instead of calling out to the internet, connect and talk to the gateway device locally.
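To make the moving parts concrete, here is a rough sketch of the gateway side with paho-mqtt and PyJWT, based on the flow in that tutorial; the project, registry, and device names, and the key file paths, are placeholders:

import datetime

import jwt                      # PyJWT
import paho.mqtt.client as mqtt

PROJECT, REGION = "my-project", "us-central1"
REGISTRY, GATEWAY_ID, DEVICE_ID = "my-registry", "my-gateway", "sensor-042"

def create_jwt(project_id, private_key_file, algorithm="RS256"):
    # Cloud IoT Core uses a short-lived JWT as the MQTT password.
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60),
              "aud": project_id}
    with open(private_key_file, "r") as f:
        return jwt.encode(claims, f.read(), algorithm=algorithm)

client = mqtt.Client(
    client_id="projects/{}/locations/{}/registries/{}/devices/{}".format(
        PROJECT, REGION, REGISTRY, GATEWAY_ID))
client.username_pw_set(
    username="unused",  # IoT Core ignores the username field
    password=create_jwt(PROJECT, "gateway_rsa_private.pem"))
client.tls_set(ca_certs="roots.pem")  # Google root CA bundle
client.connect("mqtt.googleapis.com", 8883)
client.loop_start()

# Attach the delegate device, then publish telemetry on its behalf; the
# delegate never talks to the internet itself.
client.publish("/devices/{}/attach".format(DEVICE_ID), "{}", qos=1)
client.publish("/devices/{}/events".format(DEVICE_ID), '{"temp_c": 21.5}', qos=1)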

OpenStack Cluster Event Notification

So far, based on my understanding of the OpenStack Python SDK, I am able to read the hypervisors and server instances; however, I do not see an API to receive and handle change notifications/events for operations that happen on the cluster, e.g. a new VM is added or an existing VM is deleted.
There is a similar old post (circa 2016), and I am curious whether there have been any changes in notification handling:
Notifications to external systems from openstack
I see documentation that talks about emitting notifications over a message bus to indicate different events that occur within the service:
https://docs.openstack.org/ironic/latest/admin/notifications.html
I have the following questions:
Does the OpenStack Python SDK support notification APIs?
How do I receive/monitor notifications for VM related changes?
How do I receive/monitor notifications for compute/hypervisor related changes?
How do I receive/monitor notifications for Virtual Switch related changes?
I see other posts, such as Notifications in openstack, that recommend using the Ceilometer project, which uses a database. Is there a more lightweight solution than a completely different service like Ceilometer?
Thanks in advance for your help in this regard.
As far as I can see and know, the OpenStack SDK doesn't provide such a function.
Ceilometer will also not help you. It only collects data by polling and by notifications over RPC, so you would still have to poll the data from Ceilometer yourself. Besides this, Ceilometer alone has the problem that its database only grows and will eventually blow up, which is why you should also use Gnocchi if you use Ceilometer.
At the moment I see only three possible solutions for you:
Write your own tool that runs permanently in the background and collects the data at a regular interval via OpenStack SDK and REST API requests (see the polling sketch after this list).
Write something that does the same as Ceilometer, by receiving notifications over oslo.messaging (RPC). See the oslo_messaging_notifications section in the configs: https://docs.openstack.org/ocata/config-reference/compute/config-options.html#id35 (Neutron also has such an option) and use messagingv2 as the driver, like Ceilometer does. Be aware here, though, that not every event creates a notification. The list of Ceilometer meter data gives a good overview of which events create a notification and which data can only be collected by polling: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html. The number of notification events is really low, so it is possible that it doesn't provide all the events you want.
In the oslo_messaging_notifications section of the configs, use log as the driver to write the notifications to a log file, and write a simple program that reads the log file and processes or forwards its content. This has the same problem as number 2: not every event creates a notification (a log entry, in this case). It also has the problem that the notifications, and thus the event logs, are created on the compute nodes (as far as I know), so your tool would have to watch all compute nodes.
Given that I don't know how much work it would be to write a tool that collects notifications over RPC, and because I don't know whether every event you want to watch really creates a notification (based on the overview here: https://docs.openstack.org/ceilometer/pike/admin/telemetry-measurements.html), I would prefer number 1.
It is the easiest way: create a tool that runs GET requests over the REST API at a regular interval and forwards the results to the desired destination as your own custom notifications.
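A minimal sketch of that polling approach with the openstacksdk, assuming a clouds.yaml entry named "mycloud"; the notify() stub stands in for whatever destination you forward your custom notifications to:

import time

import openstack  # openstacksdk

def notify(event, server):
    # Stand-in for your own custom notification forwarding.
    print("{}: {} ({})".format(event, server.name, server.id))

conn = openstack.connect(cloud="mycloud")
known = {}
while True:
    current = {s.id: s for s in conn.compute.servers()}
    for sid in current.keys() - known.keys():
        notify("VM added", current[sid])
    for sid in known.keys() - current.keys():
        notify("VM deleted", known[sid])
    known = current
    time.sleep(30)  # polling interval in seconds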
I followed the references below to get this working, and also chatted with the author of this code and video:
https://github.com/gibizer/nova-notification-demo/blob/master/ws_forwarder.py
https://www.youtube.com/watch?v=WFq5JWXa9AM
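For reference, the forwarder in that demo sits on the notification bus via oslo.messaging; a stripped-down sketch of the same idea, with a deployment-specific broker URL and nova's versioned_notifications topic as assumptions:

from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url="rabbit://openstack:secret@controller:5672/")
targets = [oslo_messaging.Target(topic="versioned_notifications")]

class Endpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # e.g. event_type == "instance.create.end" when a new VM is built
        print(event_type, publisher_id)

listener = oslo_messaging.get_notification_listener(
    transport, targets, [Endpoint()], executor="threading", pool="my-tool")
listener.start()
listener.wait()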
In addition, I faced another issue: by default, the OpenStack server would not allow me to connect to the RabbitMQ bus from a remote host because of an iptables rule. You will have to allow access to the RabbitMQ port in iptables.

FCM Send message to multiple devices - which way to use?

There are several ways to send FCM messages, and I don't understand which one to use even after reading the documents over and over.
My use case:
I have a user that is logged into the app on her iPhone, iPad and Android tablet. I want to make sure the push notifications are sent to all of her devices.
There are three ways to do this (I'm using the Admin SDK to send the push notifications):
Topic messaging (messaging().sendToTopic): My understanding is that this is for sending the same notification to larger groups, for example a topic for all users who want weather notifications for New York. Not for my use case.
Send multicast (messaging().sendMulticast([array of tokens])): My understanding is that this will send the same message to all device tokens in the array (up to 100). This seems perfect for my use case, right?
Device group messaging (messaging().sendToDeviceGroup): I don't really understand this one. Why use this instead of multicast or topics?
For my use case, it seems the best approach is: each time an FCM token is updated, I add that token to the user in the database, and then send the notification using admin.messaging().sendMulticast([array of tokens]). Is this what you would recommend? Why or why not?
You probably want to use multicast if you're targeting an individual user's specific devices. You probably don't want to use device groups. I suggest reading the documentation to understand how device groups work:
Device group messaging allows you to add multiple devices to a single group. This is similar to topic messaging, but includes authentication to ensure that group membership is managed only by your servers. For example, if you want to send different messages to different phone models, your servers can add/remove registrations to the appropriate groups and send the appropriate message to each group. Device group messaging differs from topic messaging in that it involves managing device groups from your servers instead of directly within your application.
Device groups are basically just topics whose membership is fully managed by your server code, not by client code.
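The question uses the Node.js Admin SDK, where this is exactly admin.messaging().sendMulticast(). As a hedged sketch, the same flow in the Python Admin SDK looks roughly like this; the token lookup helper is a hypothetical stand-in for your database code:

import firebase_admin
from firebase_admin import credentials, messaging

firebase_admin.initialize_app(
    credentials.Certificate("service-account.json"))

def load_tokens_for_user(uid):
    # Hypothetical helper: read the tokens you stored for this user.
    return ["token-iphone", "token-ipad", "token-android-tablet"]

tokens = load_tokens_for_user("alice")

message = messaging.MulticastMessage(
    notification=messaging.Notification(
        title="New message", body="You have a new message."),
    tokens=tokens,
)
response = messaging.send_multicast(message)

# Prune tokens that FCM reports as invalid so the stored list stays current.
for token, result in zip(tokens, response.responses):
    if not result.success:
        print("remove stale token from the database:", token)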

Unattended/automated Linux device key management (certs for accessing update servers)

I am currently working on a customized media center/box product for my employer. It's basically a Raspberry Pi 3B+ running Raspbian, configured to auto-update periodically via apt. The device accesses binaries for proprietary applications via a private, secured apt repo, using a pre-installed certificate on the device.
Right now, the certificate on the device is set to never expire, but going forward, we'd like to configure the certificate to expire every 4 months. We also plan to deploy a unique certificate per device we ship, so the certs can be revoked (i.e. in case the customer reports the device as stolen).
Is there a way, via apt or OpenStack/Barbican KMS to:
Update the certs for apt-repo access on the device periodically.
Setup key-encryption-keys (KEK) on the device, if we need the device to be able to download sensitive data, such as an in-memory cached copy of customer info.
Provide a mechanism for a new key to be deployed on the device if the currently-used key has expired (i.e. the user hasn't connected the device to the internet for more than 4 months). Trying to wrap my head around this one, since the device now (in this state) has an expired certificate, and I can't determine how to let it be trusted to pull a new one.
Allow keys to be tracked, revoked, and de-commissioned.
Thank you.
Is there a way I could use Barbican to:
* Update the certs for apt-repo access on the device periodically.
Barbican used to have an interface to issue certs, but this was removed. Therefore Barbican is simply a service to generate and store secrets.
You could use something like certmonger. certmonger is a client-side daemon that generates cert requests and submits them to a CA. It then tracks those certs and requests new ones when the certs are going to expire.
* Setup key-encryption-keys (KEK) on the device, if we need the device to be able to download sensitive data, such as an in-memory cached copy of customer info.
To use Barbican, you need to be able to authenticate and retrieve something like a Keystone token. Once you have that, you can use Barbican to generate key encryption keys (which would be stored in the Barbican database) and download them to the device using the secret retrieval API.
Do you need/want the KEKs escrowed like this, though?
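A minimal sketch of that flow with keystoneauth1 and python-barbicanclient, assuming the device can reach Keystone; the auth URL, credentials, and key parameters are placeholders:

from keystoneauth1 import identity, session
from barbicanclient import client as barbican_client

auth = identity.Password(
    auth_url="https://keystone.example.com:5000/v3",
    username="device-svc", password="secret", project_name="devices",
    user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)
barbican = barbican_client.Client(session=sess)

# Ask Barbican to generate a 256-bit AES key-encryption key server-side;
# the key material is stored (escrowed) in Barbican's database.
order = barbican.orders.create_key(
    name="device-0042-kek", algorithm="aes", bit_length=256, mode="cbc")
order_ref = order.submit()

# Once the order goes ACTIVE, fetch the generated key onto the device
# via the secret retrieval API.
completed = barbican.orders.get(order_ref)
kek = barbican.secrets.get(completed.secret_ref)
key_bytes = kek.payload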
* Provide a mechanism for a new key to be deployed on the device if the currently-used key has expired (i.e. the user hasn't connected the device to the internet for more than 4 months).
Barbican has no mechanism for this. This is client-side tooling that would need to be written. You'd need to think about authentication.
* Allow keys to be tracked, revoked, and de-commissioned.
Same as above. Barbican has no mechanism for this.

Is it OK to use Firebase RemoteConfig to store API Keys?

Note: for clarification, this is not the Firebase API key; it is more like a token, something that the client app possesses and the server endpoint verifies.
We are trying to do a better job of securing an API key (think of a token that is used to validate a client to an endpoint). This will all be on our internal network, but we still want to be sure that only our mobile client can call the endpoint.
I was thinking that we could put the API Key in a Firebase remote config parameter (with an invalid default value built into the app). However, the Firebase documentation for remote config says:
Don't store confidential data in Remote Config parameter keys or parameter values. It is possible to decode any parameter keys or values stored in the Remote Config settings for your project.
I wasn't sure if this is just referring to the default values that are bundled with the app, or if it is also for values that are loaded remotely. Once we have the key, we can encrypt it and store it on the device via our MDM provider.
Also, is the transfer of the remote config data to the app encrypted or done clear text?
Thanks for any more information that anyone can provide about the remote config.
It depends on how secure you want to keep your API Key. What does the API key allow someone to do? If it's simply to identify your app to another service, for example the YouTube Data API, then the worst that can happen is that a malicious user uses up your quota for that resource. On the other hand, if the key allows the holder to make some irreversible changes to important data without further authentication and authorization, then you never want it stored on their device in any form.
Your quote from the Firebase documentation answers your question. In general, you should not be storing private keys in your app. Check out the answers to this question for thorough explanations.
Using Firebase's Remote Config is hardly more secure than shipping keys in the app bundle. Either way, the data ends up on users' hardware. A malicious person can then theoretically access it, no matter how difficult we may think that is to do.
Also, I can't say for sure (you should be able to easily test this) but I HIGHLY doubt that remote config values are sent as plain text. Google does everything over https by default.
@Frank van Puffelen can confirm this, but from my understanding Firebase Remote Config uses HTTPS rather than plain HTTP, which makes sniffing the information shared between the app and Remote Config harder than decompiling the APK and reading the string constants generated by Gradle build configurations. For instance, when you debug an app with a network proxy sniffer such as Charles Proxy, you can't view the endpoint details unless the app is compiled in debug mode, due to HTTPS and newer security measures in the latest API versions.
See What makes "https" sites more secure than "http"?.
The HTTP protocol doesn't encrypt data in transit, so your personal information can be intercepted or even manipulated by third parties. To capture network information (passwords, credit card numbers, user IDs, etc.), hackers use a method called "sniffing": if network packets aren't encrypted, the data within them can be read and stolen with the help of a sniffing application.
Alternatively, HTTPS keeps any kind of data, including passwords, text messages, and credit card details, safe in transit between your computer and the servers. HTTPS keeps your data confidential by using the TLS protocol, frequently referred to as SSL, a secure certificate which offers three layers of protection: encryption, data integrity, and authentication. SSL certificates use what is known as asymmetric public-key cryptography, or a public key infrastructure (PKI) system. A PKI system uses two different keys to encrypt communications: a public key and a private key. Anything that is encrypted with the public key can only be decrypted by the corresponding private key, and vice versa. HTTPS can also protect you from hacker attacks such as man-in-the-middle attacks, DNS rebinding, and replay attacks.
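To illustrate that asymmetry, here is a small sketch with Python's cryptography package; this is just a demonstration of public-key encryption, not part of any Firebase API:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a key pair; in TLS, the server's key pair backs its certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone can encrypt with the public key...
ciphertext = public_key.encrypt(b"card number 4111 1111 1111 1111", oaep)
# ...but only the private-key holder can decrypt.
assert private_key.decrypt(ciphertext, oaep) == b"card number 4111 1111 1111 1111"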
Further Security Measures
DexGuard offers string encryption, according to their landing page. I've sent them a message and am waiting to hear how much this would cost for an indie developer.
Using a public/private API key exchange may be an additional layer of security. However, I need to research the implementation further to better understand this option.

How does Adobe EchoSign work?

I want to use EchoSign as third-party software to sign the contracts that my app generates using iText.
My app creates contracts; the way it works is the following:
The app creates the contract using iText
The app sends the contract to the approver
The approver logs in to the app and signs the PDF by pressing an approval button.
The PDF is created again by the app but now including the approval.
The PDF is stored in the database.
We want to implement EchoSign to manage the approvals. So far I know that EchoSign provides an API to work with, and I think it is possible to implement this in my app.
I have read a lot about EchoSign, and it seems that all the PDFs are stored and managed on EchoSign servers. We don't want that.
The question is: does the app need to rely on the availability of EchoSign servers to send and receive information about the documents created by the application?
Thanks in advance.
Yes, the app needs to rely on EchoSign servers because the PDF is signed using a private key owned by Adobe EchoSign. This private key is stored on a Hardware Security Module (HSM) on Adobe's side and is never transferred to the client (for obvious reasons).
You also depend on EchoSign servers because that's where the user management is done: EchoSign needs an audit trail to identify each user: credentials, IP address, login time, and so on.
If you don't want to depend on an external server, you have two options:
each user owns a token or a smart card and uses that token or smart card to sign (for instance: in Belgium, every citizen owns an eID, which is an identity card with a chip that contains a couple of private keys)
you have a server with an HSM; you manage your users on that server and sign with the private key on the HSM.
Read more about this here: http://itextpdf.com/book/digitalsignatures
