Can I use the NebulaGraph Dashboard service deployed on AWS to manage a NebulaGraph database node on another cloud such as Google Cloud?

I deployed a NebulaGraph Enterprise cluster on AWS according to their doc here. It includes a NebulaGraph Dashboard service that appears to be able to manage multiple NebulaGraph nodes.
Does anyone know if I can use Dashboard to manage my NebulaGraph database on GCP?

It's technically doable, but not worth the hacking involved.
Dashboard can:
1. do lifecycle management (including scale-in/scale-out and stop/start) of services on hosts via SSH
2. do observability/monitoring of services and hosts via exporters (node exporter or NebulaGraph exporter)
3. do NebulaGraph-related operations via GraphClient
In theory, if 2 and 3 are reachable over the network, there is no blocking issue; see the connectivity sketch below.
For 1, apart from the network perspective, lifecycle management (scale-out, for instance) overlaps with cloud infrastructure orchestration (CloudFormation, Terraform, etc.). Until the two are integrated (i.e., Dashboard calls CloudFormation stacks or Terraform to provision nodes and/or service binaries), the scaling feature cannot be used from Dashboard.
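For 2 and 3, a quick way to verify cross-cloud reachability is to probe the exporter and graphd ports from the Dashboard host. A minimal sketch, assuming common default ports (9100 for node exporter, 9200 for nebula-stats-exporter, 9669 for graphd) and a placeholder GCP hostname:

import socket

# Placeholder host and common default ports; adjust to your deployment.
targets = [("gcp-nebula-host", 9100), ("gcp-nebula-host", 9200), ("gcp-nebula-host", 9669)]

for host, port in targets:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        print(f"{host}:{port} unreachable: {exc}")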

Related

Where can I find the Performance Insights metric `db.SQL.total_query_time.avg` in CloudWatch?

There is a useful metric from AWS RDS Performance Insights called db.SQL.total_query_time.avg (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_PerfInsights_Counters.html#USER_PerfInsights_Counters.Aurora_PostgreSQL).
I would like to set up an alarm for it. However, I cannot find it anywhere in CloudWatch. Does anyone know if it exists in CloudWatch?
Amazon RDS Performance Insights metrics are not shown on the AWS CloudWatch metrics dashboard, but you can query them through the Performance Insights API. For example, you could create an AWS Lambda function that queries those metrics and triggers an alert via AWS SNS. Below are links for accessing the metrics with the AWS SDK and the AWS CLI.
AWS SDK (boto3)
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/pi.html#PI.Client.get_resource_metrics
AWS CLI
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/pi/get-resource-metrics.html
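For example, here's a minimal boto3 sketch that pulls this counter and publishes an SNS alert when it exceeds a threshold. The Performance Insights identifier (the DbiResourceId of the form db-..., not the DB instance name), the threshold, and the SNS topic ARN are all placeholders:

import boto3
from datetime import datetime, timedelta, timezone

pi = boto3.client("pi")
sns = boto3.client("sns")

end = datetime.now(timezone.utc)
resp = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOP",  # placeholder DbiResourceId
    MetricQueries=[{"Metric": "db.SQL.total_query_time.avg"}],
    StartTime=end - timedelta(minutes=10),
    EndTime=end,
    PeriodInSeconds=60,
)

# Average the returned data points and alert via SNS if they exceed a threshold.
points = [p["Value"] for m in resp["MetricList"] for p in m["DataPoints"] if "Value" in p]
if points and sum(points) / len(points) > 5.0:  # placeholder threshold
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:pi-alerts",  # placeholder topic
        Message="db.SQL.total_query_time.avg is above threshold",
    )

Running this in a Lambda function on a schedule (e.g., an EventBridge rule) gives you the alarm behavior CloudWatch would normally provide.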

AWS CloudWatch with mobile applications

I have a backend system built on AWS, and I'm using CloudWatch across all of the services for logging and monitoring. I really like the ability to send structured JSON logs into CloudWatch that are consistent and provide a lot of context around the log message. Querying the logs to get to the root of an issue, or just exploring the health of the environment, is simple - that makes CloudWatch a must-have for my backend.
Now I'm working on the frontend side of things: mobile applications using Xamarin.Forms. I know AWS has Amplify, but I really wanted to stick with Xamarin.Forms as that's a skill set I've already got and am comfortable with. Since Amplify doesn't support Xamarin.Forms, I've been looking at other options for logging - one of them being Microsoft's AppCenter.
If I go the AppCenter route, I'll end up having to build a mapping between the AppCenter installation identifier and my users across the AWS and AppCenter environments. Before I start down that path, I wanted to ask a couple of questions about the best practices and security of an alternative approach.
I'm considering using the AWS SDK for .NET: creating an IAM Role with a Policy that allows X-Ray and CloudWatch PUT operations on a specific log group, then assigning it to an IAM User. I can issue access keys for the user and embed them in my app's config files. This would let me send log data straight into CloudWatch from the mobile apps using something like NLog.
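For reference, a tightly scoped policy for that approach might look like the sketch below, created here with boto3 for brevity; the log group ARN, account ID, and policy name are placeholders, not values from the question:

import json
import boto3

iam = boto3.client("iam")

# Allow only log-stream creation and log writes on one log group, plus X-Ray uploads.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/mobile/app-logs:*",
        },
        {
            "Effect": "Allow",
            "Action": ["xray:PutTraceSegments", "xray:PutTelemetryRecords"],
            "Resource": "*",  # the X-Ray Put* actions do not support resource-level scoping
        },
    ],
}

iam.create_policy(PolicyName="mobile-logging-put-only", PolicyDocument=json.dumps(policy))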
I noticed that with AppCenter I have to provide a client secret to the app, which wouldn't be any different from providing an IAM User access key to my app for pushing into CloudWatch. I'm typically a little shy about issuing access keys from AWS, but as long as the Policy is tight I can't think of any negative side effects... other than someone flooding me with log data should they pull the key out of the app data.
An alternative route I'm exploring: instead of embedding the access keys in my config files, I could request them from my API services and hold them in memory. The only downside is that logging might be a pain when the user doesn't have internet connectivity (I'll need to look at how NLog handles sinks that aren't currently available - queueing and flushing).
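If you go the request-from-my-API route, the backend could vend short-lived STS credentials instead of long-lived access keys. A rough sketch of the backend side, with a placeholder role ARN:

import boto3

sts = boto3.client("sts")

# Assume a role restricted to the logging policy and return the temporary
# credentials to the mobile client; they expire automatically.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/mobile-logging-role",  # placeholder role
    RoleSessionName="mobile-client-session",
    DurationSeconds=3600,
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration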
Is there anything else I'm not considering or is this approach a feasible solution with minimal risk?

WSO2 API manager multi-tenant distributed setup

I installed and configured a distributed setup of WSO2 API Manager with multitenancy enabled. I have 2 distributed gateways, and I followed this guide: https://docs.wso2.com/display/AM260/Distributed+Deployment+of+the+Gateway . I created one tenant (we will call it tenantA) and deployed some APIs under it.
The problem is that with multitenancy, the Synapse API artifacts (for APIs created in tenants [and not in the super-tenant]) on the gateway are stored in APIM-HOME/repository/tenants/tenantA/synapse-configs/default/api and not under APIM-HOME/repository/deployments/server/.
The question is: should I share both paths (NFS/GlusterFS) between the gateways? If not, which one should I share?
How about the registry? I shared both the config and governance registry partitions; is it supposed to be like this?
Many thanks
In the multi-tenancy use case, those API artifacts are created at the repository/tenants location. You can find those locations in https://docs.wso2.com/display/AM260/Common+Runtime+and+Configuration+Artifacts
Yes, you have to share both paths, as the token, revoke, etc. APIs exist in the super-tenant location.
You also have to share the user DB and registry DB between the gateways in the multi-tenancy use case. https://docs.wso2.com/display/AM210/Understanding+the+Distributed+Deployment+of+WSO2+API-M

Storing Graph Databases in Google Cloud

I have a dataset of 6 million entries. Each entry has a one-to-many relationship with other entries. Previously this data was stored in a Neo4j instance.
Does Google Cloud provide a product that can store graph databases? Or is there a way to adapt an existing Google Cloud database product to work as a graph database engine?
I am trying to avoid running a Neo4j instance on a Google Compute Engine instance.
JanusGraph is an open-source graph database that can use Google Cloud Bigtable as its storage backend; here's a guide for deploying JanusGraph with Cloud Bigtable on GCP.
Some of the folks from Google even help maintain the project, so it might be close to what you're looking for.
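As a rough sketch of what querying such a deployment looks like from Python with gremlinpython, assuming a JanusGraph server is already running with its Gremlin endpoint on the default port (the hostname, property names, and edge label below are placeholders):

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Connect to the JanusGraph server's Gremlin WebSocket endpoint (default port 8182).
conn = DriverRemoteConnection("ws://janusgraph-host:8182/gremlin", "g")
g = traversal().withRemote(conn)

print(g.V().count().next())  # total vertex count
# Walk a one-to-many relationship via outgoing edges (placeholder labels).
print(g.V().has("name", "entry-1").out("relates_to").limit(10).toList())

conn.close()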

How to connect a database server running on a local machine as a service to a web application hosted on Pivotal Cloud Foundry?

I am trying to test run a basic .NET web application on Pivotal Cloud Foundry. The application uses a MongoDB server hosted on my local machine as its database. At the moment I am limited to using the cloud infrastructure through just the Apps Manager.
I have read the Pivotal Cloud Foundry docs about user-provided services, but cannot figure out how the connection is actually made. I have already come across other options, like using MongoDB as a service (beta version), but at the moment I am not allowed access to the Operations Manager. I'm looking for an explanation of user-provided services, or of how to implement the service broker API, specifically.
I am new to Mongo as well, so any suggestion about making the connection work by tweaking Mongo would help too. Thanks
The use case you describe (a web app in PCF connecting to a resource on your local machine) is not recommended.
You can, however, create a MongoDB instance for development purposes in PCF.
$ cf marketplace
...
mlab sandbox Fully managed MongoDB-as-a-Service
...
You can create an mlab service instance and bind it to your application. You will then have a MongoDB instance in PCF that you can use for development purposes.
Edit:
In that case, a user-provided service might help you: you pass in your remote MongoDB instance's configuration, which you can then read in your application, e.g.:
cf.exe cups my-mongodb -p '{"key1":"value1","key2":"value2"}'
You can add your local MongoDB as a CUPS service to your PCF Dev.
Check out the following post.
How to create a CUPS service for mongoDB?
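Once the CUPS service is bound, the app reads its credentials from the VCAP_SERVICES environment variable. A minimal sketch in Python; the "uri" key is an assumption and must match whatever keys you passed to cups:

import json
import os

# Bound user-provided services appear under the "user-provided" key of VCAP_SERVICES.
vcap = json.loads(os.environ["VCAP_SERVICES"])
creds = next(s["credentials"] for s in vcap["user-provided"] if s["name"] == "my-mongodb")
mongo_uri = creds["uri"]  # assumes you passed a "uri" key to cups
print("Connecting to", mongo_uri)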
