How to use the `DBA` user to sign in the drainer service on the primary cluster of the NebulaGraph database?

I use NebulaGraph to synchronize data between clusters.
In the primary cluster, I cannot run the SIGN IN DRAINER SERVICE command as the DBA user.
(uuu_dba@nebula) [basketballplayer]> sign in drainer service(192.168.15.2:9889)
[ERROR (-1008)]: PermissionError: No permission to write space.
Tue, 03 Jan 2023 10:26:33 CST
(uuu_dba@nebula) [basketballplayer]> show roles in basketballplayer
+-----------+-----------+
| Account   | Role Type |
+-----------+-----------+
| "uuu_dba" | "DBA"     |
+-----------+-----------+
Can I use the DBA user to register the drainer service in NebulaGraph?

Related

What is the best way to store history of data in Firebase Realtime Database?

There is an app I'm building using Firebase Realtime Database. Users of this app will be able to create a post and edit it anytime they want. Also, they're allowed to access all versions of the post, but they'll get the latest version by default. I've got this so far:
Realtime Database-root
|
--- posts
|
--- postId
|
--- versionId
| |
| --- uid
| |
| --- title
| |
| --- date
|
--- versionId
|
--- uid
|
--- title
|
--- date
However, I'm not sure that this is the best way to go. Could this approach be improved considering cost, scalability, security, and performance?
Since the RTDB costs $5 per GB stored, I would recommend using Google Storage as a cheap datastore for your versions.
What that means is your database should store posts like this:
Realtime Database-root
|
--- posts
|
--- postId
|
--- uid
|
--- title
|
--- date
--- postsVersions
|
--- postId
|
--- version1Id: true
|
--- version2Id: true
version1Id and version2Id reference JSON documents stored in Google Storage.
Workflow
1. The user creates the initial post and the database entry is made.
2. The user makes changes and saves them to the RTDB, overwriting the previous version in /posts.
3. A Cloud Function triggered on the onUpdate() event of /posts/{postId} takes the previous version, saves it to Google Storage as https://storage.google.com/<bucket>/<postId>/<version1Id>.json, and records that versionId in /postsVersions.
4. If the user deletes the version1Id key from /postsVersions, a Cloud Function monitoring the onDelete() event on /postsVersions/{postId} should delete that JSON file from Google Storage.
5. Finally, if the user would like to load a previous version of a post, they make a simple HTTPS request to retrieve the JSON file from Google Storage and then update /posts with the previous version.
A sketch of steps 3 and 4 follows below.
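Here is a minimal TypeScript sketch of steps 3 and 4 as Cloud Functions. It assumes the default Storage bucket, the database layout above, and randomUUID() for version IDs (my choice, to match the Security note below); a sketch, not a definitive implementation:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import { randomUUID } from "node:crypto";

admin.initializeApp();

// Step 3: when a post is overwritten, archive the previous version to
// Cloud Storage and record its ID under /postsVersions/{postId}.
export const archivePostVersion = functions.database
  .ref("/posts/{postId}")
  .onUpdate(async (change, context) => {
    const postId = context.params.postId;
    const versionId = randomUUID(); // unguessable, non-sequential
    const file = admin.storage().bucket().file(`${postId}/${versionId}.json`);
    await file.save(JSON.stringify(change.before.val()), {
      contentType: "application/json",
    });
    await admin.database()
      .ref(`/postsVersions/${postId}/${versionId}`)
      .set(true);
  });

// Step 4: when a version entry is removed, delete the archived JSON file.
export const deletePostVersion = functions.database
  .ref("/postsVersions/{postId}/{versionId}")
  .onDelete((snapshot, context) => {
    const { postId, versionId } = context.params;
    return admin.storage().bucket()
      .file(`${postId}/${versionId}.json`)
      .delete();
  });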
Security
You should make sure that your version IDs are unguessable and non-sequential; you can then simply provide blanket access to the bucket using IAM permissions for all authenticated users. For more granular control, where users can't potentially access other users' versioned posts, a non-public bucket and an API endpoint that generates a signed URL would be simple to implement (see the sketch below).
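For that more granular variant, a hedged sketch of a callable endpoint (the ownership check against /posts/{postId}/uid and the 15-minute expiry are assumptions; it reuses the imports and admin.initializeApp() from the sketch above):

// Verify the caller owns the post, then mint a short-lived signed URL
// for the requested version file in the non-public bucket.
export const getVersionUrl = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError("unauthenticated", "Sign in first.");
  }
  const { postId, versionId } = data;
  const owner = await admin.database().ref(`/posts/${postId}/uid`).once("value");
  if (owner.val() !== context.auth.uid) {
    throw new functions.https.HttpsError("permission-denied", "Not your post.");
  }
  const [url] = await admin.storage().bucket()
    .file(`${postId}/${versionId}.json`)
    .getSignedUrl({ action: "read", expires: Date.now() + 15 * 60 * 1000 });
  return { url };
});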

How to send token in CoAP requests?

Is there a consensus on how to send tokens as part of CoAP requests?
HTTP clients share their identity/credentials through the Authorization header (or occasionally the Cookie header), or less often as part of the URL (e.g. a JSON Web Token encoded in base64url). In the CoAP RFC, there is no option equivalent to Authorization.
Draft profiles for authentication and authorization for constrained environments (e.g. OSCORE or DTLS) rely on the underlying security protocol to maintain a stateful connection and the token is uploaded to /authz-info once:
C                            RS                      AS
| [-- Resource Request --->] |                       |
|                            |                       |
| [<----- AS Request ------] |                       |
|       Creation Hints       |                       |
|                            |                       |
| -------- POST /token ----------------------------> |
|                            |                       |
| <------------------------------ Access Token ----- |
|                            | + Access Information  |
| ---- POST /authz-info ---> |                       |
|     (access_token, N1)     |                       |
|                            |                       |
| <--- 2.01 Created (N2) --- |                       |
However, this is not suitable for unprotected environments (e.g. during development or in a private network). Since CoAP typically runs on top of UDP, it is not possible to have "sessions", so we cannot track which client uploaded which token. I'd like to avoid discussing the usefulness of tokens in an unprotected context.
Should the token be part of the URI (i.e. as a Uri-Query option or in the Uri-Path)? ThingsBoard is an example of a service that puts the token in the Uri-Path, but I'm not sure this is the best option. Also, would sending a binary CBOR Web Token in Uri-Query be possible without base64url encoding?
In CoAP, no attempt is made to provide user authentication without a security layer in place, nor are there provisions for sessions in the CoAP layer (which would violate the REST design).
As I understand, that was a conscious decision to guide implementers towards a RESTful design, and to avoid the transfer of security tokens over unsecured transports.
For the first of the two use cases ("during development"), it is possible to introduce your own CoAP option in the experimental-use range, for example "65003 = Simulated-OSCORE-Key-ID" (where 65003 is chosen to be critical and not safe to forward). While the OSCORE stack is not operational, an ad-hoc made-up OSCORE key ID could be inserted there, and a token could then be POSTed using it in the ACE steps. Obviously, such constructs should make their way neither out of a test setup nor into specifications.
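As an illustration of what that looks like on the wire, here is a dependency-free TypeScript sketch that hand-encodes a CoAP GET carrying such a made-up option (the option number 65003 and the one-byte key ID are the assumptions from above; the encoding follows RFC 7252, section 3.1):

const SIMULATED_OSCORE_KEY_ID = 65003; // critical, not safe to forward

// Encode one CoAP option: 4-bit delta/length nibbles, with extended
// bytes when a field is 13 or larger (RFC 7252, section 3.1).
function encodeOption(delta: number, value: Uint8Array): Uint8Array {
  const deltaExt: number[] = [];
  const lengthExt: number[] = [];
  const nibble = (n: number, ext: number[]): number => {
    if (n < 13) return n;
    if (n < 269) { ext.push(n - 13); return 13; }
    ext.push((n - 269) >> 8, (n - 269) & 0xff);
    return 14;
  };
  const d = nibble(delta, deltaExt);
  const l = nibble(value.length, lengthExt);
  return Uint8Array.from([(d << 4) | l, ...deltaExt, ...lengthExt, ...value]);
}

// Confirmable GET with an empty token and a single option, so the
// option delta equals the option number itself.
function buildGetWithKeyId(messageId: number, keyId: Uint8Array): Uint8Array {
  const header = [
    0x40,                          // ver=1, type=CON, token length=0
    0x01,                          // code 0.01 (GET)
    messageId >> 8, messageId & 0xff,
  ];
  return Uint8Array.from([...header, ...encodeOption(SIMULATED_OSCORE_KEY_ID, keyId)]);
}

// Prints 40 01 12 34 e1 fc de 2a: header, then option 65003 with a 1-byte value.
console.log(buildGetWithKeyId(0x1234, Uint8Array.from([0x2a])));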
Private networks are considered as untrusted ("NoSec mode") as the rest of the network in CoAP. That assumption can probably be understood better in the context of the principle of least authority, which makes setups where all devices (and processes) on a network share the same set of permissions highly unlikely.

How to send notifications using FCM after every new entry in Firebase Real-time database?

I have a group messaging app that uses the Firebase Realtime Database. I am trying to devise a way for every group member to receive a notification whenever a user sends a message.
Following is the structure of my database.
plan-129a0
|
-- plan
|
-- LVAMCUC8S0S6tuTtLjk
|
-- exp_duration: "same_day"
|
-- members
|
-- 0: member_id
|
-- 1: member_id
|
-- messages
|
-- LVAMDqIHDrTDeTUrfkM
|
-- msgTime:
|
-- name:
|
-- text: "Hello World"
|
-- name: "Plan one"
|
-- plan_admin:
|
-- timeCreated: "Wed"
user
|
-- LT46t95CKQ9dFgXv-JF
|
-- name:
|
-- number:
|
-- uid:
I am trying to send a notification to every user in the members node each time there is a new entry in the messages node. My question is: can Firebase Cloud Functions achieve this task?
Some posts that I have gone through on how to implement this state that I need a token for every device in order to send them notifications. What is this token, and how do I get it in Android for every device that registers with my app?
So it sounds like you want to send messages to all members of a group. This means you must somehow know what devices the members of the group are using the app on. Each device that the app is installed on is identified by a so-called App Instance ID (also often referred to as an FCM token), so you essentially need to map these tokens/IDs to your application's groups.
You have two options for doing this with FCM:
Use a topic to identify each group
This means that the app should subscribe to the topic for each group that the user is a member of. And then to send a message to the group, you call the FCM API to send a message to that group's topic.
In this option, FCM manages the relationship between the groups and tokens for you, and it handles the expansion of the group to the list of tokens.
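A TypeScript sketch of this option as a Cloud Function, assuming the database layout above and a made-up plan-{planId} topic naming scheme:

import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

// Fan a notification out to the group's topic whenever a message is added.
export const notifyGroupTopic = functions.database
  .ref("/plan/{planId}/messages/{messageId}")
  .onCreate((snapshot, context) => {
    const msg = snapshot.val();
    return admin.messaging().send({
      topic: `plan-${context.params.planId}`,
      notification: { title: msg.name, body: msg.text },
    });
  });

On the Android side, each member's app would subscribe once per group with FirebaseMessaging.getInstance().subscribeToTopic("plan-" + planId).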
Keep track of which app instances relate to each group
This means that you need to get the App Instance IDs/FCM tokens for each user and store them in your own database, keeping in mind that there may be multiple per user and that they may expire. Then, when you want to send a message to a group, you look up all IDs/tokens for that group and call the FCM API to send a message to that list of tokens.
In this option, you are managing the relationship between groups and tokens yourself. This allows you more flexibility, but also means you have to do more work in managing the data.
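The same trigger for the token-based option; storing tokens at /deviceTokens/{uid}/{token} is an assumption about your schema, and sendEachForMulticast() needs a recent firebase-admin (older releases call it sendMulticast()). It reuses the imports and admin.initializeApp() from the sketch above:

export const notifyGroupTokens = functions.database
  .ref("/plan/{planId}/messages/{messageId}")
  .onCreate(async (snapshot, context) => {
    const msg = snapshot.val();
    const members = await admin.database()
      .ref(`/plan/${context.params.planId}/members`).once("value");
    const memberIds: string[] = Object.values(members.val() ?? {});
    // Gather every registered device token for every member.
    const tokenSnaps = await Promise.all(memberIds.map((uid) =>
      admin.database().ref(`/deviceTokens/${uid}`).once("value")));
    const tokens = tokenSnaps.flatMap((s) => Object.keys(s.val() ?? {}));
    if (tokens.length === 0) return;
    const result = await admin.messaging().sendEachForMulticast({
      tokens,
      notification: { title: msg.name, body: msg.text },
    });
    // Expired or invalid tokens come back as failures; prune these
    // from /deviceTokens in a real implementation.
    result.responses.forEach((r, i) => {
      if (!r.success) console.warn(`token no longer valid: ${tokens[i]}`);
    });
  });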

Adding Database in MariaDB on Swisscom Cloud

Is it possible to CREATE DATABASE in MariaDB on the Swisscom Cloud? I know I can do this on Amazon RDS, but I'm not sure whether it is possible on the Swisscom Cloud. If not, it would be an important feature to add.
Sorry, this is not possible. You receive database space on a shared Galera Cluster (MariaDB), where you are granted all privileges on your own database.
Galera Cluster for MySQL is a true Multimaster Cluster based on synchronous replication. Galera Cluster is an easy-to-use, high-availability solution, which provides high system uptime, no data loss and scalability for future growth.
> SHOW GRANTS FOR CURRENT_USER();
+----------------------------------------------------------------------------------------------------------+
| Grants for 2RQGCnSeAmJTWYwX@%                                                                              |
+----------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO '2RQGCnSeAmJTWYwX'@'%' IDENTIFIED BY PASSWORD '$HASH' WITH MAX_USER_CONNECTIONS 10  |
| GRANT ALL PRIVILEGES ON `CF_32FD02B6_9B18_473D_A4D8_C84E19EC6F2C`.* TO '2RQGCnSeAmJTWYwX'@'%'             |
+----------------------------------------------------------------------------------------------------------+
2 rows in set (0.01 sec)
Here is the error (Access denied) when creating a new database:
MariaDB [CF_32FD02B6_9B18_473D_A4D8_C84E19EC6F2C]> create database rokfor;
ERROR 1044 (42000): Access denied for user '2RQGCnSeAmJTWYwX'@'%' to database 'rokfor'
Please subscribe to our newsletter for new feature announcements.
rokfor - would it be possible for you to share a little more insight into why you need to create your own database? (Behind the scenes, on each creation of a MariaDB service you get your own database.)
Thanks a lot!
@rokfor:
We have some substantial improvements planned on the roadmap for MariaDB in Q3/Q4.
The possibility of adding a new schema/DB is not one of them ... but we have some other features in the pipeline that I believe will eliminate the whole need for such a feature. Can you please share with us why you would need it, to confirm our assumptions? TIA!
Stay up to date with Application Cloud News via newsletter or our news page:
https://developer.swisscom.com/news
Michal Maczka
Product Manager Swisscom Application Cloud

Connection Active Google Cloud SQL

I have set up WordPress on Google App Engine with Google Cloud SQL.
I have only two minimal plugins installed: Google App Engine for WordPress and Batcache Manager.
There are no posts and no other plugins.
I have no visitors or followers.
My issue is that I still have an active connection to Google Cloud SQL, which is charged at $0.025 per hour.
I have one connection which always stays on.
I would like to close this connection when it is not in use.
Here is my full process list:
+----+------+---------------+------+---------+------+-------+-----------------------+
| Id | User | Host          | db   | Command | Time | State | Info                  |
+----+------+---------------+------+---------+------+-------+-----------------------+
| 12 | root | 27.32.---.--- | NULL | Query   |    0 | NULL  | show full processlist |
+----+------+---------------+------+---------+------+-------+-----------------------+
1 row in set (0.33 sec)
WordPress on App Engine is optimized for high-traffic sites, so Batcache keeps the Cloud SQL connection alive.
You can tune that. See https://wordpress.org/plugins/google-app-engine/other_notes/ or search for "optimize wordpress batcache on app engine".
