I have set up WordPress on Google App Engine with Google Cloud SQL.
I have only two minimal plugins installed: the Google App Engine plugin and Batcache Manager.
There are no posts, no other plugins.
I have no visitors or followers.
My issue is that I still have an active connection to Google Cloud SQL, which is charged at $0.025 per hour.
I have one connection which always stays open.
I would like this connection to close when not in use.
This is my full process list:
+----+------+---------------+------+---------+------+-------+-----------------------+
| Id | User | Host          | db   | Command | Time | State | Info                  |
+----+------+---------------+------+---------+------+-------+-----------------------+
| 12 | root | 27.32.---.--- | NULL | Query   | 0    | NULL  | show full processlist |
+----+------+---------------+------+---------+------+-------+-----------------------+
1 row in set (0.33 sec)
WordPress on App Engine is optimized for high-traffic sites, so Batcache keeps the Cloud SQL connection alive.
You can tune that. See https://wordpress.org/plugins/google-app-engine/other_notes/ or search for "optimize wordpress batcache on app engine".
I use NebulaGraph to synchronize data between clusters.
In the master cluster, I cannot use the DBA user to run the SIGN IN DRAINER SERVICE command.
(uuu_dba#nebula) [basketballplayer]> sign in drainer service(192.168.15.2:9889)
[ERROR (-1008)]: PermissionError: No permission to write space.
Tue, 03 Jan 2023 10:26:33 CST
(uuu_dba#nebula) [basketballplayer]> show roles in basketballplayer
+-----------+-----------+
| Account | Role Type |
+-----------+-----------+
| "uuu_dba" | "DBA" |
+-----------+-----------+
Can I use the DBA user to register the drainer service in Nebula Graph?
I want to design my application, which requires writing asynchronously through pub/sub and reading synchronously from the database. How do you implement this in clean architecture? A separate process reads from the pub/sub topic and writes to the database.
I'm thinking of a design like this; however, I wasn't sure if it's a good design for one repository to be implemented using separate persistence infrastructure for reading and writing.
Repository         Controller         Infrastructure
my repository ---> my controller ---> (write) pub/sub
                                  \-> (read) database
An alternative could be CQRS, but in my case, I use the same model for reading and writing, so I think it's better to use a single model.
Background:
Writes to my application are very elastic, while reads are consistent, so I write asynchronously to avoid overloading my service.
Thanks!
It's the same as with any other clean architecture application.
+------------+  ||  +----------+      +------------+
| Controller | ---> | Use Case | ---> | Repository |
+------------+  ||  +----------+      +------------+
                ||                    | save()     |
                ||                    | find()     |
                ||                    +------------+
                ||                          ^
================||==========================|===========
                                            |
                                      +------------+
                                      | RepoImpl   |
                                      +------------+
                                      | save()     | ---> write to pub/sub
                                      | find()     | ---> read from db
                                      +------------+
Maybe you split up the repository writes and reads into different interfaces, because you might have a use case that only reads and another that only writes.
I usually apply the interface segregation principle and define use case specific interfaces.
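For illustration, here is a minimal TypeScript sketch of that layout. All of the names (Post, PostReader, PostWriter, PostRepositoryImpl, and the MessagePublisher/Database ports) are made up for this example:

// Domain model shared by the read and write sides (example names).
interface Post {
  id: string;
  title: string;
}

// Interface segregation: a use case depends only on what it needs.
interface PostReader {
  find(id: string): Promise<Post | null>;
}

interface PostWriter {
  save(post: Post): Promise<void>;
}

// Ports to the infrastructure; the concrete types depend on your stack
// (e.g. a pub/sub client and a SQL driver).
interface MessagePublisher {
  publish(topic: string, payload: unknown): Promise<void>;
}

interface Database {
  queryPost(id: string): Promise<Post | null>;
}

// One implementation satisfies both interfaces: save() only enqueues a
// message (a separate consumer process persists it), while find() reads
// the eventually consistent state directly from the database.
class PostRepositoryImpl implements PostReader, PostWriter {
  constructor(
    private readonly publisher: MessagePublisher,
    private readonly db: Database,
  ) {}

  async save(post: Post): Promise<void> {
    await this.publisher.publish('posts', post);
  }

  async find(id: string): Promise<Post | null> {
    return this.db.queryPost(id);
  }
}

A use case that only reads then takes a PostReader in its constructor, so it cannot accidentally trigger the asynchronous write path. Note that because save() only enqueues, a find() immediately after a save() may not see the new data yet.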
There is an app I'm building using Firebase Realtime Database. Users of this app will be able to create a post and edit it anytime they want. Also, they're allowed to access all versions of the post, but they'll get the latest version by default. I've got this so far:
Realtime Database-root
  |
  --- posts
        |
        --- postId
              |
              --- versionId
              |     |
              |     --- uid
              |     |
              |     --- title
              |     |
              |     --- date
              |
              --- versionId
                    |
                    --- uid
                    |
                    --- title
                    |
                    --- date
However, I'm not sure this is the best way to go. Could this approach be improved considering cost, scalability, security and performance?
Since the RTDB costs $5 per GB stored, I would recommend that you use Google Storage as a cheap datastore for your versions.
That means your database should store posts like this:
Realtime Database-root
  |
  --- posts
  |     |
  |     --- postId
  |           |
  |           --- uid
  |           |
  |           --- title
  |           |
  |           --- date
  |
  --- postsVersions
        |
        --- postId
              |
              --- version1Id: true
              |
              --- version2Id: true
version1Id and version2Id reference JSON documents stored in Google Storage.
Workflow
User creates the initial post and the database entry is made.
User makes changes and saves them to the RTDB, overwriting the previous version in /posts.
A Cloud Function triggered on the onUpdate() event on /posts/{postId} takes the previous version, saves it to Google Storage as https://storage.googleapis.com/<bucket>/<postId>/<version1Id>.json, and saves that versionId to /postsVersions.
If the user deletes the version1Id key from /postsVersions, a Cloud Function monitoring the onDelete() event on /postsVersions/{postId} should delete that JSON file from Google Storage.
Finally, if the user would like to load a previous version of a post, they make a simple HTTPS request to retrieve the JSON file from Google Storage and then update /posts with the previous version.
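A rough sketch of those two triggers using the first-generation Firebase Functions API in TypeScript; the exported function names and the <postId>/<versionId>.json object layout are assumptions for this example:

import { randomUUID } from 'crypto';
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();

// On every edit to /posts/{postId}, archive the *previous* version to the
// default Storage bucket and index it under /postsVersions/{postId}.
export const archivePostVersion = functions.database
  .ref('/posts/{postId}')
  .onUpdate(async (change, context) => {
    const postId = context.params.postId;
    const previous = change.before.val();
    // Random version IDs are non-sequential and unguessable
    // (see the security note below).
    const versionId = randomUUID();

    await admin
      .storage()
      .bucket()
      .file(`${postId}/${versionId}.json`)
      .save(JSON.stringify(previous), { contentType: 'application/json' });

    await admin.database().ref(`/postsVersions/${postId}/${versionId}`).set(true);
  });

// When a version key is removed from the index, delete its JSON file.
export const deletePostVersion = functions.database
  .ref('/postsVersions/{postId}/{versionId}')
  .onDelete(async (snapshot, context) => {
    const { postId, versionId } = context.params;
    await admin.storage().bucket().file(`${postId}/${versionId}.json`).delete();
  });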
Security
You should make sure that your version IDs are unguessable and non-sequential, and you can then simply provide blanket access to the bucket using IAM permissions for all authenticated users. For more granular control, where users can't potentially access other users' versioned posts, a non-public bucket and an API endpoint that generates a signed URL would be simple to implement.
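For the granular option, a short-lived signed URL can be minted server-side with the Admin SDK's Storage client. A sketch, assuming a callable function named getVersionUrl and the object layout from above (a real implementation should also verify that the caller owns the post):

import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

// admin.initializeApp() is assumed to have been called once at startup.

export const getVersionUrl = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'Sign in first.');
  }
  const { postId, versionId } = data as { postId: string; versionId: string };

  const [url] = await admin
    .storage()
    .bucket()
    .file(`${postId}/${versionId}.json`)
    .getSignedUrl({
      action: 'read',
      expires: Date.now() + 15 * 60 * 1000, // valid for 15 minutes
    });

  return { url };
});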
Is there a consensus on how to send tokens as part of CoAP requests?
HTTP clients share their identity/credentials through the Authorization header (or eventually the Cookie header), or less often as part of the URL (e.g. a JSON Web Token encoded in base64url). In the CoAP RFC, there is no equivalent of the Authorization option.
Draft profiles for authentication and authorization for constrained environments (e.g. OSCORE or DTLS) rely on the underlying security protocol to maintain a stateful connection, and the token is uploaded to /authz-info once:
C                            RS                    AS
| [-- Resource Request --->] |                     |
|                            |                     |
| [<----- AS Request ------] |                     |
|       Creation Hints       |                     |
|                            |                     |
| ----- POST /token -----------------------------> |
|                            |                     |
| <---------------------------- Access Token ----- |
|                             + Access Information |
| ---- POST /authz-info ---> |                     |
|    (access_token, N1)      |                     |
|                            |                     |
| <--- 2.01 Created (N2) --- |                     |
However, this is not suitable for unprotected environments (e.g. for development or in a private network). Since CoAP typically runs on top of UDP, it is not possible to have "sessions", so we cannot track which client uploaded which token. I'd like to avoid discussing the usefulness of tokens in an unprotected context.
Should the token be part of the URI (i.e. as a Uri-Query option or in the Uri-Path)? ThingsBoard is an example of a service that carries the token as part of the Uri-Path, but I'm not sure if this is the best option. Also, would sending a binary CBOR Web Token in Uri-Query be possible without base64url encoding?
In CoAP, no attempt is made to provide user authentication without a security layer in place, nor are there provisions for sessions in the CoAP layer (which would violate the REST design).
As I understand it, that was a conscious decision to guide implementers towards a RESTful design, and to avoid the transfer of security tokens over unsecured transports.
For the "during development" use case, it is possible to introduce your own CoAP option in the experimental-use range, for example "65003 = Simulated-OSCORE-Key-ID" (where 65003 is chosen to be critical and not safe to forward). While the OSCORE stack is not operational, an ad-hoc made-up OSCORE key ID could be inserted there, and a token could then be POSTed using it in the ACE steps. Obviously, those constructs should make their way neither out of a test setup nor into specifications.
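As a side note on why 65003 has those properties: per RFC 7252 §5.4.6, the Critical and Unsafe flags are encoded in the low-order bits of the option number itself, which this short TypeScript check illustrates (the function name is made up):

// RFC 7252 §5.4.6: option properties are encoded in the option number.
function describeCoapOption(num: number): string {
  const critical = (num & 1) !== 0; // receiver must understand it or reject the message
  const unsafe = (num & 2) !== 0;   // a proxy that doesn't understand it must not forward it
  return `option ${num}: ${critical ? 'critical' : 'elective'}, ` +
    `${unsafe ? 'unsafe' : 'safe'} to forward`;
}

console.log(describeCoapOption(65003)); // option 65003: critical, unsafe to forward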
In CoAP, private networks are considered just as untrusted ("NoSec" mode) as the rest of the network. That assumption can probably be understood better in the context of the principle of least authority, which makes setups where all devices (and processes) on a network share the same set of permissions highly unlikely.
I'm recording Selenium tests using Selenium IDE to test the registration flow of my Drupal site, which depends heavily on the Rules module.
Some of the tests involve the registration of user accounts. Since I will be using these tests on multiple servers with different numbers of users, I do not know upon starting the test which user ID to check for. Since the user ID is in the URL, I was hoping to grab it and store it in Selenium.
Upon logging in, users are redirected to a URL like http://192.168.100.100:8888/en/user/6, where "6" is the UID.
I imagine that I could use Selenium's storeValue command to do this, but what should I put as the target to pull the user ID out of the URL?
store | http://192.168.100.100:8888/en/user/6 | string
store | 1 | delimiter
store | javascript{storedVars['string'].split('user/')[storedVars['delimiter']]} | result
echo | ${result}
Or
storeLocation | string
store | 1 | delimiter
store | javascript{storedVars['string'].split('user/')[storedVars['delimiter']]} | result
echo | ${result}
The result will be 6.
You could do this by grabbing the URL and parsing the string for the user ID if you are using RC or Selenium 2.
For specific help, you'll have to shed some light on which version of Selenium you are using (IDE, Selenium RC, Selenium 2/WebDriver) and which language.
If you have a sample URL string, you can get a more exact answer as well.
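For example, with Selenium 2/WebDriver and the selenium-webdriver Node bindings, a TypeScript sketch could look like this (the URL pattern comes from the question; the browser choice and function name are assumptions):

import { Builder } from 'selenium-webdriver';

async function getUserIdAfterLogin(): Promise<string> {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    // ... perform the registration/login steps here ...

    // After logging in, the browser is redirected to a URL like
    // http://192.168.100.100:8888/en/user/6, where "6" is the UID.
    const url = await driver.getCurrentUrl();
    return url.split('user/')[1]; // "6"
  } finally {
    await driver.quit();
  }
}

getUserIdAfterLogin().then((uid) => console.log(`UID: ${uid}`));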