Monitoring NebulaGraph K8s resources

We want to extend our existing monitoring to include NebulaGraph resources such as node, service, and space.
How can we do this? Can you please point me to any documentation?

For node, I take it to mean the OS/machine level; the vanilla node_exporter (with Prometheus) [0] will do the job.
For service- and space-level monitoring, we could leverage the nebula-stats-exporter [1].
And actually, there is an all-in-one solution out there in the NebulaGraph community called NebulaGraph Dashboard [2], which already wires everything together. Even if you are connecting things up from scratch on your own, you can still refer to it for how it leverages the exporters.
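For instance, a minimal Prometheus scrape configuration covering both levels could look like the sketch below (host names, job names, and the nebula-stats-exporter port are placeholders for your deployment; 9100 is node_exporter's default port):

scrape_configs:
  # OS/machine-level metrics from node_exporter on each host.
  - job_name: 'node'
    static_configs:
      - targets: ['nebula-host-1:9100', 'nebula-host-2:9100']
  # Service/space-level metrics exposed by nebula-stats-exporter.
  - job_name: 'nebula-stats'
    static_configs:
      - targets: ['nebula-stats-exporter:9200']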
For documentation, you could check the following chapters:
Dashboard: https://docs.nebula-graph.io/3.3.0/nebula-dashboard/1.what-is-dashboard/
Metrics: https://docs.nebula-graph.io/3.3.0/6.monitor-and-metrics/1.query-performance-metrics/
References:
[0] https://github.com/prometheus/node_exporter
[1] https://github.com/vesoft-inc/nebula-stats-exporter
[2] https://github.com/vesoft-inc/nebula-dashboard


Airflow Metadata DB = airflow_db?

I have a project requirement to back up the Airflow metadata DB to a data warehouse (but not using an Airflow DAG). At the same time, the requirement mentions a connection called airflow_db.
I am quite new to Airflow, so I googled the topic a bit, and I am confused about one part. Our Airflow metadata DB is PostgreSQL (built from docker-compose, so I am tinkering on a local install), but when I look at Connections in the Airflow web UI, it says airflow_db is MySQL.
I initially assumed that they are the same, but by the looks of it they aren't? Can someone explain the difference and what each is for?
Airflow creates the airflow_db Conn Id with a MySQL connection type by default (see the source code).
Default connections are not really useful in a production system; they're just a long list of entries that you are probably not going to use.
Airflow 1.10.10 introduced the ability to skip creating the default list by setting:
load_default_connections = False in airflow.cfg (see PR)
To give more background: the connection list is where hooks find the information needed to connect to a service. It is not related to the backend database. That said, the backend is a database like any other, and if you wish to allow hooks to interact with it, you can define it in the list like any other connection (which is probably why it shows up among the defaults). A sketch of doing exactly that follows.
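For example, a minimal sketch of reading from the metadata DB through a hook, assuming you have re-pointed the airflow_db connection at your PostgreSQL metadata database (the conn id, provider import path, and query are illustrative of Airflow 2.x):

from airflow.providers.postgres.hooks.postgres import PostgresHook

# Assumes the "airflow_db" connection has been edited to point at the
# PostgreSQL metadata database (host/schema/login of your backend).
hook = PostgresHook(postgres_conn_id="airflow_db")

# Illustrative query against a real metadata table.
rows = hook.get_records("SELECT dag_id, is_paused FROM dag LIMIT 10;")
for dag_id, is_paused in rows:
    print(dag_id, is_paused)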

OKD 4.5 - How to upgrade cluster in restricted network

I want to upgrade an OKD cluster from version 4.5.0-0.okd-2020-10-03-012432 to 4.5.0-0.okd-2020-10-15-235428
in a restricted network.
I could not find any steps on the OKD documentation site. However, the steps are present on the OCP documentation site and look straightforward.
Queries:
Is this scenario supported in OKD?
In the document below, at step #7, what would be the corresponding step for OKD?
https://docs.openshift.com/container-platform/4.5/updating/updating-restricted-network-cluster.html#update-configuring-image-signature
Where can I get image signature for OKD? Is this step valid for OKD?
I figured it out.
I did not perform the steps mentioned in https://docs.openshift.com/container-platform/4.5/updating/updating-restricted-network-cluster.html#update-configuring-image-signature
The --apply-release-image-signature flag of the oc adm release mirror ... command creates the ConfigMap automatically; a sketch of the full command is below.

Provisioning a 2-node-type Service Fabric cluster with ARM

I've been trying to provision a two-node-type Service Fabric cluster using ARM. The secondary node type (backend) should not be exposed to the internet, so I've created a load balancer with an internal IP address.
Everything gets provisioned correctly, but I cannot get the nodes added to the cluster. When I open the cluster in the Azure portal, it says it has no nodes, even though it has the node types configured.
I have even tried downloading the template produced by the Azure portal after creating a Service Fabric cluster. I have also executed one of the templates provided on GitHub, and I still cannot see any nodes in the cluster.
Any suggestion what I could be missing?
Thanks
Glad to hear you got that sorted. Regarding your follow-up question on deploying to the backend node types: that's where you'd use placement constraints. When you create a cluster in Azure through ARM, it automatically sets up a placement property on each node using the node type name you defined. So on your backend nodes, assuming your node type is called "backendnode", you'll have the following placement property defined:
NodeTypeName: backendnode
When you deploy your services, just use that as your placement constraint:
New-ServiceFabricService -ApplicationName "fabric:/myapp" `
    -ServiceName "fabric:/myapp/myservice" `
    -ServiceTypeName "myservicetype" `
    -Stateful -MinReplicaSetSize 2 -TargetReplicaSetSize 3 `
    -PartitionSchemeSingleton `
    -PlacementConstraint "NodeTypeName == backendnode"
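If you prefer to declare the constraint in the application manifest instead, the same expression can go on the default service. A sketch under the assumption that the layout matches Service Fabric's documented placement-constraint examples (service and type names mirror the hypothetical ones above):

<DefaultServices>
  <Service Name="myservice">
    <StatefulService ServiceTypeName="myservicetype" TargetReplicaSetSize="3" MinReplicaSetSize="2">
      <SingletonPartition />
      <PlacementConstraints>NodeTypeName == backendnode</PlacementConstraints>
    </StatefulService>
  </Service>
</DefaultServices>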

How to migrate WordPress between Compute Engine instances

I have recently created a very small Google Compute Engine instance, naively thinking it's one of those easily scalable things Google people keep raving about.
I used the quick deployment feature of Wordpress and it all installed itself nicely, so I started configuring and adding data etc.
However, I then found out that I can't scale an existing instance (i.e. it won't allow me to change the instance type to a bigger one. I don't get why not, but there you go.), so it looks like I need to find a way to migrate my Wordpress installation to a new instance.
Will I simply be able to create a new instance and point it at the persistent disk my small instance currently uses, et voila, Bob's your uncle?
Or do I need to manually get the files and MySQL data off the first instance and re-import them into an empty new instance?
What's the easiest way?
Any advice or helpful links would be appreciated.
Thanks.
P.S.: By the way, should I try to use Google Cloud SQL instead of a local MySQL installation?
In order to upgrade your VM:
1. Access the VM's settings in the Developers Console (your project -> Compute -> Compute Engine -> VM instances -> click on the VM's name).
2. Scroll down to the "Disks" section and un-check "Delete boot disk when instance is deleted".
3. Delete the VM in question. Note that the disk, named after the instance, will remain.
4. Create a new VM, selecting "Existing disk" under Boot disk - Boot source. In the next box down, select the disk from step 3, as well as a bigger machine type.
The resulting new instance will use the existing disk from the old one, with improved hardware/performance; the equivalent gcloud commands are sketched below.
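A rough equivalent with the gcloud CLI, assuming an instance and boot disk both named my-wordpress-vm (the zone and machine type are placeholders; verify the flags against your gcloud version):

# Step 2: keep the boot disk when the instance is deleted.
gcloud compute instances set-disk-auto-delete my-wordpress-vm \
    --zone=us-central1-a --disk=my-wordpress-vm --no-auto-delete

# Step 3: delete the old instance; the disk survives.
gcloud compute instances delete my-wordpress-vm --zone=us-central1-a

# Step 4: recreate on a bigger machine type, booting from the existing disk.
gcloud compute instances create my-wordpress-vm \
    --zone=us-central1-a --machine-type=n1-standard-2 \
    --disk=name=my-wordpress-vm,boot=yes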
As for using Cloud SQL in lieu of a VM-installed database, it's perfectly feasible, and it allows you to adjust the Cloud SQL instance to match your actual usage. A few considerations when setting up this kind of instance (a rough gcloud equivalent is sketched after the list):
Limit the IPs allowed to connect to your Cloud SQL instance to your frontend's IP, and perhaps the workstation IP or subnet from which you maintain the database.
Configure Cloud SQL to use SSL certificates.
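In gcloud terms, those two points might look roughly like this (the instance name and CIDR are placeholders; check the flags against the current gcloud sql reference):

gcloud sql instances patch my-wordpress-db \
    --authorized-networks=203.0.113.10/32 \
    --require-ssl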
Sammy's answer covers the important stuff; I just wanted to clarify how your files are arranged on the two disks that are attached to your instance:
The data disk contains /var/www/, i.e. all of the WordPress files; it's mounted on the instance at /wordpress.
The boot disk contains everything else, including the MySQL database that was created for the WordPress installation.

Oracle Coherence with WebLogic Server?

Hi, I am new to Oracle Coherence.
Question 1: My scenario is that I have to implement an Oracle Coherence replicated cache in my web application (running on WebLogic Server). Coherence should be part of the WebLogic server, meaning that when I start the WebLogic server, Coherence starts with it (both should run in a single JVM). Please help me with how to do this.
Question 2: Do I need a database to maintain the records, or does Oracle Coherence itself maintain them on the file system? If so, how, and what happens to the cached data when I shut down the server?
Q1:
I would describe it in a couple of steps:
Place coherence.jar on the classpath. Depending on the specific case, this can be the WLS classpath or the application's classpath. Unless you want to share a Coherence node between many applications, it is often better to put it on the application's classpath; this also has other advantages, like easier maintenance.
Prepare your own cache configuration for the replicated topology. You can skip this step if you want to use Coherence's default cache configuration, coherence-cache-config.xml, which includes a replicated topology, but keep in mind that your cache name must then start with repl-, and this is in general not recommended for production. Otherwise, put the following in your custom-cache-config.xml file and add it to your application's classpath.
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
                                  coherence-cache-config.xsd">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>my-repl-cache</cache-name>
      <scheme-name>replicated</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <replicated-scheme>
      <scheme-name>replicated</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </replicated-scheme>
  </caching-schemes>
</cache-config>
Create a ServletContextListener for your application and place the following code into its contextInitialized method (a fuller sketch of the listener follows the snippet):
// join existing cluster or form a new one
CacheFactory.ensureCluster();
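A minimal sketch of such a listener, assuming the javax.servlet API (the class name is illustrative; register it in web.xml via a <listener> element or with @WebListener):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import com.tangosol.net.CacheFactory;

public class CoherenceLifecycleListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Join an existing Coherence cluster or form a new one.
        CacheFactory.ensureCluster();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Leave the cluster cleanly when the application is undeployed.
        CacheFactory.shutdown();
    }
}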
Start your WLS with the following option:
-Dtangosol.coherence.cacheconfig=custom-cache-config.xml
Deploy and start your application (possibly on many servers).
Q2:
In general, Coherence is an in-memory solution and does not persist data by default. If you need to keep data in a persistent store, you can look into the CacheStore interface; this is described in the documentation, and a bare-bones sketch follows.
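For illustration, a bare-bones CacheStore sketch, assuming Coherence's com.tangosol.net.cache.AbstractCacheStore base class (the in-memory map stands in for a real database; a production store would use JDBC or similar):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.tangosol.net.cache.AbstractCacheStore;

public class DemoCacheStore extends AbstractCacheStore {

    // Stand-in for a persistent store such as a database table.
    private final Map<Object, Object> table = new ConcurrentHashMap<>();

    @Override
    public Object load(Object key) {
        // Called on a cache miss: read the value from the persistent store.
        return table.get(key);
    }

    @Override
    public void store(Object key, Object value) {
        // Called when a cache entry changes: write it through to the store.
        table.put(key, value);
    }

    @Override
    public void erase(Object key) {
        // Called when the entry is removed from the cache.
        table.remove(key);
    }
}

The store is then wired into the cache's backing map via a <cachestore-scheme> inside a <read-write-backing-map-scheme> in the cache configuration.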
Keep in mind that you often have more than one Coherence node in the cluster, so you will not lose your data when you shut down one of them, because the data is also stored in the other JVM(s). When you restart the node, it will rejoin the cluster and your data will be there.
Starting with WebLogic 12.1.2, there is excellent Coherence integration via the "Coherence Containers" functionality of WebLogic, in addition to the ActiveCache feature of WebLogic. Here is a URL for the container feature: http://docs.oracle.com/middleware/1212/wls/WLCOH/deploy-wls-coherence.htm
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.
