How to send a token in CoAP requests?

Is there a consensus on how to send tokens as part of CoAP requests?
HTTP clients share their identity/credentials through the Authorization header (or sometimes the Cookie header), or less often as part of the URL (e.g. a JSON Web Token encoded in base64url). In the CoAP RFC, there is no option equivalent to Authorization.
Draft profiles for authentication and authorization for constrained environments (e.g. OSCORE or DTLS) rely on the underlying security protocol to maintain a stateful connection and the token is uploaded to /authz-info once:
C                               RS                             AS
| [---- Resource Request --->]  |                              |
|                               |                              |
| [<------ AS Request -------]  |                              |
|        Creation Hints         |                              |
|                               |                              |
| ------------------------- POST /token --------------------> |
|                               |                              |
| <------------------------ Access Token -------------------- |
|                               |     + Access Information     |
| ---- POST /authz-info ----->  |                              |
|     (access_token, N1)        |                              |
|                               |                              |
| <--- 2.01 Created (N2) -----  |                              |
However, this is not suitable for unprotected environments (e.g. during development or in a private network). Since CoAP typically runs on top of UDP, there are no "sessions", so we cannot track which client uploaded which token. I'd like to avoid discussing the usefulness of tokens in an unprotected context.
Should the token be part of the URI (i.e. as a Uri-Query option or in the Uri-Path)? ThingsBoard is an example of a service that puts the token in the Uri-Path, but I'm not sure this is the best option. Also, would sending a binary CBOR Web Token in Uri-Query be possible without base64url encoding?
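On the last point: Uri-Query is defined as a string-format option in RFC 7252, so raw CBOR bytes can't go in as-is; unpadded base64url is the usual workaround. A minimal sketch (the token bytes here are made up):

```python
import base64

# Hypothetical binary CBOR Web Token (any opaque bytes would do).
cwt = bytes([0xD8, 0x3D, 0xA2, 0x01, 0x04])

# base64url without padding keeps the value printable and query-safe:
token = base64.urlsafe_b64encode(cwt).rstrip(b"=").decode("ascii")
query = f"token={token}"
```

The receiver re-pads and decodes; no URL-escaping is needed because the base64url alphabet avoids `+` and `/`.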

In CoAP, no attempt is made to provide user authentication without a security layer in place, nor are there provisions for sessions in the CoAP layer (which would violate the REST design).
As I understand, that was a conscious decision to guide implementers towards a RESTful design, and to avoid the transfer of security tokens over unsecured transports.
For the "during development" use case, it is possible to introduce your own CoAP option in the experimental-use range, for example "65003 = Simulated-OSCORE-Key-ID" (where 65003 is chosen to be critical and not safe to forward). While the OSCORE stack is not operational, an ad-hoc made-up OSCORE key ID could be inserted there, and a token could then be POSTed using it in the ACE steps. Obviously, such constructs should make their way neither out of a test setup nor into specifications.
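The choice of 65003 follows the option-number bit layout of RFC 7252 §5.4.6, which a couple of lines of Python can illustrate:

```python
# CoAP option-number properties (RFC 7252 §5.4.6): the low bits of the
# option number itself encode its semantics.
def option_properties(number: int) -> dict:
    return {
        "critical": bool(number & 1),           # receiver must understand it
        "unsafe_to_forward": bool(number & 2),  # proxies must not forward it blindly
    }

option_properties(65003)  # {'critical': True, 'unsafe_to_forward': True}
```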
Private networks are considered as untrusted ("NoSec mode") as the rest of the network in CoAP. That assumption can probably be understood better in the context of the principle of least authority, which makes setups where all devices (and processes) on a network share the same set of permissions highly unlikely.


How to design a repository writing to pub/sub and reading from database in clean architecture

I want to design my application which requires writing asynchronously through pub/sub and reading synchronously from the database. How do you implement this in clean architecture? There's a separate process reading from the pub/sub and writing to the database.
I'm thinking of a design like this; however, I'm not sure it's a good design for one repository to be implemented using separate persistence infrastructure for reading and writing.
Repository        Controller        Infrastructure

my repository --> my controller --> (write) pub/sub
                                \-> (read)  database
An alternative could be CQRS, but in my case, I use the same model for reading and writing, so I think it's better to use a single model.
Background:
Writes to my application are very elastic, while reads are consistent. So I avoid getting my service overloaded by writing asynchronously.
Thanks!
It's the same as with any other clean architecture application.
+------------+  ||  +----------+      +------------+
| Controller | ---> | Use Case | ---> | Repository |
+------------+  ||  +----------+      +------------+
                ||                    | save()     |
                ||                    | find()     |
                ||                    +------------+
                ||                          ^
================||==========================|===========
                                            |
                                      +------------+
                                      |  RepoImpl  |
                                      +------------+
                                      | save()     | ---> write to pub/sub
                                      | find()     | ---> read from db
                                      +------------+
You might also split the repository's writes and reads into different interfaces, because you might have a use case that only reads and another that only writes.
I usually apply the interface segregation principle and define use case specific interfaces.
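To make that split concrete, here is a minimal Python sketch of the diagram's shape, with one implementation behind two segregated interfaces (the `User` model, the "users" topic name, and the publisher/database APIs are hypothetical stand-ins):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    user_id: str
    name: str

class UserWriter(ABC):
    """Interface for use cases that only write."""
    @abstractmethod
    def save(self, user: User) -> None: ...

class UserReader(ABC):
    """Interface for use cases that only read."""
    @abstractmethod
    def find(self, user_id: str) -> Optional[User]: ...

class UserRepository(UserWriter, UserReader):
    """One implementation backed by two different infrastructures."""
    def __init__(self, publisher, database):
        self._publisher = publisher  # assumed pub/sub client with .publish()
        self._database = database    # assumed DB gateway with .get()

    def save(self, user: User) -> None:
        # Write path: publish; a separate consumer persists to the DB later.
        self._publisher.publish("users", {"id": user.user_id, "name": user.name})

    def find(self, user_id: str) -> Optional[User]:
        # Read path: query the database synchronously.
        row = self._database.get(user_id)
        return User(row["id"], row["name"]) if row else None
```

A read-only use case can then depend on `UserReader` alone, so it cannot accidentally trigger a pub/sub write.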

How to send notifications using FCM after every new entry in the Firebase Realtime Database?

I have a group messaging app that makes use of the Firebase Realtime Database. I am trying to devise a way that whenever a user sends a message, all group members receive a notification of the message.
The following is the structure of my database:
plan-129a0
|
+-- plan
    |
    +-- LVAMCUC8S0S6tuTtLjk
        |
        +-- exp_duration: "same_day"
        +-- members
        |   +-- 0: member_id
        |   +-- 1: member_id
        +-- messages
        |   +-- LVAMDqIHDrTDeTUrfkM
        |       +-- msgTime:
        |       +-- name:
        |       +-- text: "Hello World"
        +-- name: "Plan one"
        +-- plan_admin:
        +-- timeCreated: "Wed"

user
|
+-- LT46t95CKQ9dFgXv-JF
    +-- name:
    +-- number:
    +-- uid:
I am trying to send a notification to every user in the members node each time there is a new entry in the messages node. My question is: can Firebase Cloud Functions achieve this task?
Some of the posts that I have gone through on how to implement this state that I need a token for every device in order to send them notifications. What is this token, and how do I get it in Android for every device that registers with my app?
So it sounds like you want to send messages to all members of a group. This means you must somehow know what devices the members of the group are using the app on. Each device that the app is installed on is identified by a so-called App Instance ID (also often referred to as an FCM token), so you essentially need to map these tokens/IDs to your application's groups.
You have two options for doing this with FCM:
Use a topic to identify each group
This means that the app should subscribe to the topic for each group that the user is a member of. And then to send a message to the group, you call the FCM API to send a message to that group's topic.
In this option, FCM manages the relationship between the groups and tokens for you, and it handles the expansion of the group to the list of tokens.
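As an illustration of the first option, here is a sketch of the JSON payload a topic send uses with the FCM HTTP v1 API; the `group-` topic naming convention is a made-up assumption:

```python
# Sketch: build the FCM HTTP v1 payload for a topic send.
def build_topic_message(group_id: str, sender: str, text: str) -> dict:
    return {
        "message": {
            "topic": f"group-{group_id}",  # each group maps to one topic
            "notification": {
                "title": f"New message from {sender}",
                "body": text,
            },
        }
    }
```

On the device, the app subscribes with `FirebaseMessaging.getInstance().subscribeToTopic("group-<id>")`, and the server POSTs this payload (authorized with an OAuth 2.0 access token) to the `projects/<project-id>/messages:send` endpoint.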
Keep track of which app instances relate to each group
This means that you need to get the App Instance IDs/FCM tokens for each user and store them in your own database, keeping in mind that there may be multiple, and that they may expire. Then when you want to send a message to a group, you look up all IDs/tokens for that group, and call the FCM API to send a message to a list of tokens.
In this option, you are managing the relationship between groups and tokens yourself. This allows you more flexibility, but also means you have to do more work in managing the data.
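A sketch of what managing that mapping yourself looks like (in-memory here; in the question's setup the tokens would live under each user in the Realtime Database — all names are hypothetical):

```python
class TokenRegistry:
    """Maps member IDs to their FCM tokens; a member may have several devices."""
    def __init__(self):
        self._tokens = {}  # member_id -> set of FCM tokens

    def register(self, member_id: str, token: str) -> None:
        self._tokens.setdefault(member_id, set()).add(token)

    def unregister(self, member_id: str, token: str) -> None:
        # Call this when FCM reports a token as expired or invalid.
        self._tokens.get(member_id, set()).discard(token)

    def tokens_for_group(self, member_ids: list) -> list:
        # Expand the group's member list into the flat token list FCM needs.
        out = []
        for member in member_ids:
            out.extend(sorted(self._tokens.get(member, ())))
        return out
```

The member IDs come straight from the `members` node in the database structure above.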

Active connection to Google Cloud SQL

I have set up WordPress on Google App Engine with Google Cloud SQL.
I have only two minimal plugins set up: Google App Engine and Batcache Manager.
There are no posts and no other plugins.
I have no visitors or followers.
My issue is that I still have an active connection to Google Cloud SQL, which is charged at 0.025 per hour.
I have one connection which always stays on.
I would like to close this connection when it is not in use.
This is my full process list:
+----+------+---------------+------+---------+------+-------+-----------------------+
| Id | User | Host          | db   | Command | Time | State | Info                  |
+----+------+---------------+------+---------+------+-------+-----------------------+
| 12 | root | 27.32.---.--- | NULL | Query   |    0 | NULL  | show full processlist |
+----+------+---------------+------+---------+------+-------+-----------------------+
1 row in set (0.33 sec)
WordPress on App Engine is optimized for high-traffic sites, so Batcache keeps Cloud SQL alive.
You can tune that. See https://wordpress.org/plugins/google-app-engine/other_notes/ or search for "optimize wordpress batcache on app engine".

How do you find the current platform in robotframework?

I'm trying to find a Keyword or automatic variable that contains the current platform that the robot framework suite is being run on.
I assume the framework must know this, since it can access the file system.
I want to use this to load variable resources depending on the current platform.
You can use the Evaluate keyword to get the platform from python using sys.platform:
*** Test Cases ***
| Example which logs the current platform
| | ${platform}= | Evaluate | sys.platform | sys
| | log | ${platform}
The exact values that are returned are documented in the sys.platform documentation.
For more fine-grained information, such as processor type, you can use the platform module in a similar manner.
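For reference, this is the plain Python that the `Evaluate` keyword runs, plus the `platform` module mentioned above (the printed values are examples and depend on the machine):

```python
import sys
import platform

# What the Evaluate keyword above returns:
print(sys.platform)       # e.g. "linux", "win32", "darwin"

# Finer-grained details from the platform module:
print(platform.system())  # e.g. "Linux", "Windows", "Darwin"
print(platform.machine()) # processor architecture, e.g. "x86_64"
```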

How can I get and store the user ID from the URL in a Selenium test?

I'm recording Selenium tests using Selenium IDE to test the registration flow of my drupal site, which depends heavily on the rules module.
Some of the tests involve the registration of user accounts. Since I will be using these tests on multiple servers with different amounts of users, I do not know upon starting the test which user ID to check for. Since the user ID is in the URL, I was hoping to grab it and store it in Selenium.
Upon logging in, users are redirected to a URL like http://192.168.100.100:8888/en/user/6, where "6" is the UID.
I imagine that I could use Selenium's storeValue command to do this, but what should I put as the target to pull the user ID out of the URL?
store | http://192.168.100.100:8888/en/user/6 | string
store | 1 | delimiter
store | javascript{storedVars['string'].split('user/')[storedVars['delimiter']]} | result
echo | ${result}
Or
storeLocation | string
store | 1 | delimiter
store | javascript{storedVars['string'].split('user/')[storedVars['delimiter']]} | result
echo | ${result}
The result will be 6.
You could do this by grabbing the URL and parsing the string for the user ID if you are using RC or Selenium 2.
For specific help, you'll have to shed some light on which version of Selenium you are using (IDE, Selenium RC, Selenium 2/WebDriver) and which language.
If you have a sample URL string you can get a more exact answer as well.
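For the WebDriver case, the parsing itself is short; a sketch in Python, assuming the `/en/user/<uid>` URL shape from the question (`uid_from_url` is a made-up helper name):

```python
from urllib.parse import urlparse

def uid_from_url(url: str) -> str:
    # The UID is the last path segment, e.g. "/en/user/6" -> "6".
    path = urlparse(url).path
    return path.rstrip("/").split("/")[-1]

uid_from_url("http://192.168.100.100:8888/en/user/6")  # "6"
```

With Selenium WebDriver you would feed it `driver.current_url` right after the post-login redirect.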
