How to operate the Meta service in NebulaGraph Dashboard?

I am using NebulaGraph Dashboard. Why can't I select the meta service when scaling?

The Metad service stores the metadata of the NebulaGraph database. If the Metad service fails, the entire cluster may break down. In addition, the amount of data handled by the Metad service is small, so there is little need to scale it. For these reasons, Dashboard disables operations on the Metad service altogether, to prevent the cluster from becoming unavailable through user misoperation.

Related

ETA of Cosmos DB Autopilot being available for existing database/ARM template

When will Autopilot be available for existing databases and containers?
When will Autopilot be configurable through an ARM template?
Is there an ETA for these features? We need to decide whether to wait for the official Autopilot or implement it ourselves if it will take a long time before GA.
ARM support for Autopilot, as well as migration capability for existing database and container resources, will be available when this feature goes GA.
We are not disclosing the exact GA date, but it will be fairly soon.
Thanks.
This is now GA; the ARM template below works for me:
Create an Azure Cosmos DB account for Core (SQL) API with autoscale
I still get 'Error updating offer for resource dbs/... {"code":500,"body":{"code":"InternalServerError","message":"Message: {\"Errors\":[\"An unknown error occurred while processing this request. If the issue persists, please contact Azure Support:
Existing container, UK South
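For anyone who prefers to verify autoscale programmatically rather than through an ARM template, here is a minimal sketch using the azure-cosmos Python SDK (assuming a recent SDK release that exposes ThroughputProperties; the account endpoint, key, database, and container names are placeholders):

    from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

    # Placeholders: substitute your own account endpoint and primary key.
    client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<primary-key>")
    database = client.create_database_if_not_exists(id="appdb")

    # Create a container whose throughput autoscales up to 4000 RU/s.
    container = database.create_container_if_not_exists(
        id="orders",
        partition_key=PartitionKey(path="/customerId"),
        offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000),
    )
    print(container.read()["id"])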

Couple of Queries on Google Cloud SQL

I'm setting up an application which is a set of microservices consuming a Cloud SQL database in GCP. My questions are:
I want to set up HA for Cloud SQL across regions (a primary region and a secondary region with active replication enabled). I do not see any out-of-the-box setup from Google Cloud to achieve this. The out-of-the-box HA for Cloud SQL 2nd Gen is an HA instance in another zone of the same region. Please advise on the best practice to achieve cross-region HA.
All the microservices should use a private IP to operate on this MySQL instance. How do I set this up?
Is there any native MySQL support for enabling active replication to another region?
Is it possible to set up manual backups per customer requirements? I understand automatic backups are available. To meet RPO/RTO requirements I want to customize the database backup frequency - is that possible?
I want to set up HA for Cloud SQL across regions (a primary region and a secondary region with active replication enabled)
You can use the external master feature to replicate to an instance in another zone.
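As a related illustration (not the external master flow itself), here is a hedged sketch of asking the Cloud SQL Admin API to create a read replica in a different region with the Python API client. The project, instance, region, and tier values are placeholders, and whether cross-region replicas are available depends on your instance's generation and settings:

    import google.auth
    from googleapiclient import discovery

    # Application Default Credentials; the project comes from the environment.
    credentials, project = google.auth.default()
    sqladmin = discovery.build("sqladmin", "v1beta4", credentials=credentials)

    # Hypothetical names: "orders-primary" is the existing primary instance,
    # "orders-replica-euw2" is the replica to create in another region.
    replica_body = {
        "name": "orders-replica-euw2",
        "masterInstanceName": "orders-primary",
        "region": "europe-west2",
        "settings": {"tier": "db-n1-standard-2"},
    }
    operation = sqladmin.instances().insert(project=project, body=replica_body).execute()
    print(operation["name"], operation["status"])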
All the microservices should use a private IP to operate on this MySQL instance. How do I set this up?
Instructions for Private IP setup are here. In short, your services will need to be on the same VPC as the Cloud SQL instances.
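Once the service and the Cloud SQL instance share a VPC, the connection itself is an ordinary MySQL connection to the instance's private address. A minimal sketch, assuming PyMySQL and placeholder credentials and IP:

    import pymysql

    # The host is the Cloud SQL instance's private IP (placeholder value);
    # it is only reachable from workloads inside the same VPC network.
    conn = pymysql.connect(
        host="10.12.0.5",
        user="app_user",
        password="app_password",
        database="appdb",
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT VERSION()")
            print(cur.fetchone())
    finally:
        conn.close()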
Is it possible to set up manual backups per customer requirements?
You can configure backups using the SQL Admin API.
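For example, a hedged sketch of triggering an on-demand backup through the SQL Admin API with the Python API client; scheduling a call like this (Cloud Scheduler, cron, etc.) is one way to get a custom backup frequency. The project and instance names are placeholders:

    import google.auth
    from googleapiclient import discovery

    credentials, project = google.auth.default()
    sqladmin = discovery.build("sqladmin", "v1beta4", credentials=credentials)

    # "orders-primary" is a hypothetical instance name.
    operation = sqladmin.backupRuns().insert(
        project=project,
        instance="orders-primary",
        body={"description": "on-demand backup before release"},
    ).execute()
    print(operation["status"])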
Let me list your questions along with their responses:
I want to set up HA for Cloud SQL across regions (a primary region and a secondary region with active replication enabled). I do not see any out-of-the-box setup from Google Cloud to achieve this. The out-of-the-box HA for Cloud SQL 2nd Gen is an HA instance in another zone of the same region. Please advise on the best practice to achieve cross-region HA.
- According to the documentation [1], the HA configuration is made up of a primary instance (master) in the primary zone and a failover replica in the secondary zone; at the moment, HA for Cloud SQL across regions is not possible.
All the microservices should use a private IP to operate on this MySQL instance. How do I set this up?
- You can set up a Cloud SQL instance to use private IP; please review the following information, which you may find helpful [2].
Is there any native MySQL support for enabling active replication to another region?
- I would recommend getting in contact with MySQL support [3] so that you get the help you need; in the meantime, you could review the following link [4] and see whether it fits your needs.
Is it possible to set up manual backups per customer requirements? I understand automatic backups are available. To meet RPO/RTO requirements I want to customize the database backup frequency - is that possible?
- You can create a backup on demand; please review the following link [5], which illustrates how to set up this kind of backup.
Please let me know if this information helps to address your questions.
[1] https://cloud.google.com/sql/docs/mysql/high-availability
[2] https://cloud.google.com/sql/docs/mysql/private-ip
[3] https://www.mysql.com/support/
[4] https://dev.mysql.com/doc/mysql-cluster-excerpt/5.6/en/mysql-cluster-replication-conflict-resolution.html
[5] https://cloud.google.com/sql/docs/mysql/backup-recovery/backing-up#on-demand

Storing and Retrieving Published APIs in WSO2 AM

I have a Docker instance of wso2-am running with published APIs that work fine. However, when the Docker instance is shut down and started up again, the published APIs and their configurations are lost.
How can I persist the published APIs so that they are mapped and displayed correctly once the wso2-am Docker instance is started again?
This is a basic issue with Docker: once the container is removed (or recreated from the image), all of the data stored inside it is lost with it.
In order to save the data, I had to use the docker commit command to save the previous working state.
APIM-related data is stored in the database (API-related metadata) and on the filesystem (Synapse APIs, throttling policies, etc.). By default, APIM uses the H2 database. To persist the data, you will have to point this to an RDBMS (MySQL, Oracle, etc.). See https://docs.wso2.com/display/AM260/Changing+the+Default+API-M+Databases
To persist API-related artifacts (Synapse files, etc.), you have to preserve the content in the repository/deployment/server location. For this you could use an NFS mount.
Also, please refer to https://docs.wso2.com/display/AM260/Deploying+API+Manager+using+Single+Node+Instances for information about doing a single-node deployment.

WSO2 API Manager as 2 instance all-in-one setup

I have recently deployed WSO2 API Manager (2.0.0) as a 2-instance all-in-one cluster (using the Hazelcast AWS scheme) with a MySQL datasource, as specified in this link.
Since I was not able to find a complete step-by-step installation guide for this setup, I would like to clarify a few areas that I am not too sure of.
Depsync via SVN - since this will be manager-to-manager nodes (instead of manager-to-worker nodes), both will have <AutoCommit>true</AutoCommit>. Should we have any concerns about this?
DAS - having DAS as a separate node, should both WSO2 AM and WSO2 DAS share the same WSO2AM_STATS_DB database?
Publisher - can we use both publishers (i.e., one at a time)? We noticed that once we publish an API, it takes time for the other publisher to sync the state to Published (even though the new API appears almost immediately on the other publisher as Created).
Thank you.
1) If you enable <AutoCommit>true</AutoCommit> on both nodes, it can cause SVN conflicts when there is parallel publishing from the 2 nodes. Instead, you can publish to multiple gateways from the publisher. For that, you can configure multiple environments in the <Environments> section of api-manager.xml.
2) Yes, DAS writes summarized data to that DB, and APIM dashboards read data from the same DB.
3) All publisher/store nodes should be in the same cluster; only then can they communicate about API state changes, etc. To be in the same cluster, all these nodes should have the same clustering domain. You can configure that in the clustering section of axis2.xml.

Is there a direct way to query and update App data from within a proxy or do I have to use the management API?

I need to change Attributes of an App, and I understand I can do it with management server API calls.
The two issues with using the management server APIs are:
Performance: it means making calls to the management server when it might be possible to do the work directly in the message processor. Performance issues can probably be mitigated with caching.
Availability: having to use the management server APIs means that the system depends on the management server being available, whereas doing it directly in the proxy itself would reduce the number of failure points.
Any recommended alternatives?
Ultimately, all entities are stored in Cassandra (for the runtime).
Your best choice is to use an access entity policy to get any info about an entity; that does not hit the MS. But just for your information: most of the time you do not even need an access entity policy. When you use a verify API key or verify access token policy, all the related entity details are made available as flow variables by the MP, so no additional access entity calls should be required.
When you are updating any entity (like a developer or an application), I assume it is a management-type use case and not a runtime use case. Hence, using the management APIs should be fine.
If your use case requires a runtime API call to in turn update an attribute of the application, then possibly that attribute should not be part of the application. Think about how you can move it out to a cache, a KVM, or some other place where you can access it from the MP (just a thought, without completely knowing the use cases).
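If you do go the management API route for the update itself, a hedged sketch of changing a single developer app attribute with Python and requests might look like this; the organization, developer, app, and attribute names are placeholders, and the exact path should be checked against the management API reference for your Apigee Edge version:

    import requests

    ORG = "my-org"                  # placeholder organization
    DEVELOPER = "dev@example.com"   # placeholder developer email
    APP = "my-app"                  # placeholder app name
    ATTRIBUTE = "tier"              # placeholder attribute name

    url = (
        "https://api.enterprise.apigee.com/v1/organizations/"
        f"{ORG}/developers/{DEVELOPER}/apps/{APP}/attributes/{ATTRIBUTE}"
    )

    # Management API credentials (placeholders); use whatever auth your org requires.
    resp = requests.post(url, json={"name": ATTRIBUTE, "value": "gold"},
                         auth=("admin@example.com", "secret"))
    resp.raise_for_status()
    print(resp.json())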
The design of the system is that all entity editing goes through the Management Server, which in turn is responsible for performing the edits in a performant and scalable way. The Management Server is also responsible for knowing which message processors need to be informed of the changes via zookeeper registration. This also ensures that if a given Message Processor is unavailable because it, for example, is being upgraded, it will get the updates whenever it becomes available. The Management Server is the source of truth.
In the case of developer app attributes (or really any app metadata), the values are cached for 3 minutes (I think), so the Message Processor may not see the new values for up to 3 minutes.
As far as availability, the Management Server is designed to be highly available, relying on the same underlying architecture as the message processor design.
