Nebula-br backup data to local error: E_LIST_CLUSTER_NO_AGENT_FAILURE - nebula-graph

Ask NebulaGraph Database: Nebula-BR backup data to local reports an error!
The cluster has three machines, and each of them runs the metad, graphd, and storaged services.
All three machines have agents deployed, all bound to actual IP addresses rather than 127.0.0.1:
./agent --agent="10.128.22.109:8888" --meta="10.128.22.109:9559"
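For context, in a setup like this one agent instance normally runs on every machine, each registered under its own address. A sketch of what that looks like, with hypothetical addresses for the other two hosts (only 10.128.22.109 appears in the report above):
# machine 1 (address from the report)
./agent --agent="10.128.22.109:8888" --meta="10.128.22.109:9559"
# machine 2 (hypothetical address)
./agent --agent="10.128.22.110:8888" --meta="10.128.22.109:9559"
# machine 3 (hypothetical address)
./agent --agent="10.128.22.111:8888" --meta="10.128.22.109:9559"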

Related

Connect webapp running in docker (windows) container to SQL Server running on AWS private subnet

Environment
ASP.NET MVC app running on Docker
Docker image: microsoft/aspnet:4.7.2-windowsservercore-1803 running on Docker for Windows on a Windows 10 Enterprise host
SQL Server running on AWS EC2 in a private subnet
VPN connection to the subnet
Background
The application is able to connect to the database when the VPN is active, and everything works fine. However, when the app runs in Docker, the underlying connection to the database is refused. Since the database is in a private subnet, the VPN is needed to connect.
I am able to ping the database server as well as the general internet from a command prompt launched inside the container, so the underlying networking is working fine.
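For reference, a check like that can be run with docker exec against the running container (the container name below is a placeholder):
docker exec -it mywebapp ping 192.168.1.100
docker exec -it mywebapp ping 8.8.8.8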
Configuration
Dockerfile
FROM microsoft/aspnet:4.7.2-windowsservercore-1803
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
Docker Compose
version: '3.4'
services:
  myWebApp:
    image: ${DOCKER_REGISTRY}myWebApp
    build:
      context: .
      dockerfile: Dockerfile
The networks entry is removed because NAT is mapped to Ethernet and I am running on WiFi, so it is disabled.
SQL connection string (default instance on the default port)
"Data Source=192.168.1.100;Initial Catalog=Admin;Persist Security Info=True;User ID=admin;Password=WVU8PLDR" providerName="System.Data.SqlClient"
(Screenshots of the local network configuration and ping status were attached here.)
Let me know what needs to be fixed. Any environment- or configuration-specific information can be provided.
After multiple iterations of different approaches to this issue, we finally figured out the solution that we incorporated into our production environment.
The SQL Server primary instance was in a private subnet, so it cannot be accessed by any application outside the subnet. SQL Enterprise Manager and other apps living on local machines are able to access it via the VPN, as the OS tunnels that traffic to the private network. However, since Docker cannot easily join the VPN network (it would be too complicated, though perhaps not actually impossible), we needed to find another solution.
For this, we set up a reverse proxy for the private subnet; it lives on a public IP and is hence accessible via the public Internet. This server is granted permission in the underlying security group settings to talk to the SQL Server (port 1433 is opened to the private IP).
So the application running in Docker calls the IP of the reverse proxy, which in turn routes the traffic to the SQL Server. There is the cost of one additional hop involved, but that is something we have to live with.
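For reference, a TCP-level forward like the one described could be sketched with something like the nginx stream module; the private IP below is a placeholder, not the address actually used:
# /etc/nginx/nginx.conf (excerpt): forward inbound connections on 1433 to the SQL Server's private IP
stream {
    server {
        listen 1433;
        proxy_pass 10.0.1.25:1433;
    }
}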
Let me know if anyone can figure out a better design. Thanks

Please Explain Kubernetes External Address vs Internal Addresses

In a VMware environment, should the external address be populated with the VM's (or host's) IP address?
I have three clusters, and have found that only those using a "cloud provider" have external addresses when I run kubectl get nodes -o wide. It is my understanding that the "cloud provider" plugin (GCP, AWS, VMware, etc.) is what assigns the public IP address to the node.
KOPS deployed to GCP = the external address is the real public IP address of the nodes.
Kubeadm deployed to VMware, using the VMware cloud provider = the external address is the same as the internal address (a private range).
Kubeadm deployed, NO cloud provider = no external IP.
I ask because I have a tool that scrapes /api/v1/nodes and then interacts with each host that it finds, using the "external ip". This only works with my first two clusters.
My tool runs on the local network of the clusters; should it be targeting the "internal ip" instead? In other words, is the internal IP ALWAYS the IP address of the VM or physical host (when installed on bare metal)?
Thank you
Bare metal will not have an "external-IP" for the nodes, and the "internal-ip" will be the IP address of the nodes. You are running your command from inside the same network as your local cluster, so you should be able to use this internal IP address to access the nodes as required.
When using k8s on bare metal, the external IP and load balancer functions don't natively exist. If you want to expose an "External IP" (quotes because in most cases it would still be a 10.X.X.X address) from your bare-metal cluster, you would need to install something like MetalLB.
https://github.com/google/metallb
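As a side note, a tool like the one described could read the internal addresses straight from the node objects instead of relying on an external IP; a minimal example using plain kubectl:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'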

How to map VM to internal network?

I have a Dell machine with plenty of resources (32 GB RAM, 24 CPU cores, and 5 TB of disk space).
I have installed OpenStack (DevStack) on this machine, which runs Ubuntu and has the IP address 10.10.1.3.
This machine is on our local network, meaning I can SSH directly to this big machine from my laptop if I am on the same network.
Now I have created a virtual machine instance using OpenStack; it runs Ubuntu and has the IP address 10.10.0.3.
Now I want to access this virtual machine directly from my laptop, just like I access the big machine.
Any solution for this?
If your VM (let's call it an "instance") is on an internal network (a tenant/project network), what you need is a FIP (floating IP) from your external network, so you can assign that FIP to your OpenStack instance. Also ensure your security groups allow SSH to your VM!
I have some questions here so I can help you in a better way:
Do you have an external network already created (flat or VLAN based)?
Is the VM using a tenant/project internal (GRE/VXLAN) network?
Did you create a router in your tenant which uses the external network for external access?
Is the aforementioned router already connected to your internal network?
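Assuming those pieces are in place, the floating IP assignment itself would look roughly like this with the OpenStack CLI (network, instance, and address names are placeholders):
# allocate a floating IP from the external network and attach it to the instance
openstack floating ip create external-net
openstack server add floating ip my-instance 10.10.1.50
# make sure SSH (TCP/22) is allowed by the instance's security group
openstack security group rule create --proto tcp --dst-port 22 default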

Can't access port 7474 across Rackspace cloud servers

I've set up Neo4j on port 7474 on a Rackspace cloud server. I want to access this server from another Rackspace cloud server (the app server), but the connection is refused.
I've tried enabling access from the app server to port 7474 on the Neo4j server using ufw:
sudo ufw allow from 22.234.298.297 to any port 7474
I can see this rule when I run 'ufw status', but it doesn't seem to make any difference when I try to connect from the app server. I can SSH between these two servers.
How do I open port 7474 between cloud servers on Rackspace?
(My apologies for this very basic question, but Rackspace support are not helping and I can't find Rackspace-specific information on this.)
Glad we could solve the problem (see the comments on the question).
It so happens that Neo4j accepts only connections from localhost by default. When trying to access Neo4j via the REST API from an app server within the same network, one has to configure the Neo4j server to open up.
The neo4j-server.properties configuration file has a configuration key org.neo4j.server.webserver.address. You have a couple of options here.
Grant app servers in the same local network access to the Neo4j REST API
Grant everybody access and let the firewall handle it
For the first case, use the local IP address of the machine where Neo4j is running. Let's say your machines are connected via a private class C network. The machine with Neo4j has the IP 192.168.1.4 - that's the IP you want to enter as the value of org.neo4j.server.webserver.address, so your app server running in the same network, with maybe an IP of 192.168.1.5, can make network requests that are answered by the Neo4j web server.
For the second case, you enter 0.0.0.0 as the value of org.neo4j.server.webserver.address to denote that you want to accept connections on all available IP addresses on that machine. In that case you want to set up your firewall to control who can talk to the server and who can't - even with authentication enabled.
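In neo4j-server.properties the two options come down to a single line (using the key named above and the example addresses from this answer):
# neo4j-server.properties
# option 1: listen only on the private-network address
org.neo4j.server.webserver.address=192.168.1.4
# option 2: listen on all interfaces and let the firewall restrict access
#org.neo4j.server.webserver.address=0.0.0.0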
Extra
In a production environment that requires high availability, one can use Neo4j's enterprise edition with a high-availability cluster in a master-slave setting. I've used it with one master and two slaves. I configured the Neo4j servers so that they can only be accessed from the proxy server, which routes writing Cypher queries to the master and reading queries to the slaves. The proxy itself had a hardware firewall on it to ensure that only specific app servers within the network have access to the Neo4j database.

Creating a fake internal network for virtual machines

I have a production application that consists of
An app server VM1 - 192.168.0.4
A database server DB - 192.168.0.5
VM1 connects to the DB using its IP address.
I want to mimic production VM1 on my development machine, so it should connect to the DB using the same IP as in production but reach my development machine's DB instance.
Ideally, I would not have to hardcode my development machine's IP to make this work. Any ideas?
1) It's a bad idea to mimic production IP addresses on test machines
2) In answer to your question, just use a configuration file to describe the IP address of the system to connect to
Try rinetd; you can set up a redirect from the production IP address to your own.
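For what it's worth, rinetd is driven by /etc/rinetd.conf, where each rule is a line of the form "bindaddress bindport connectaddress connectport". A hypothetical rule redirecting the production database address to a dev machine (the dev IP and the port are placeholders, since the database type isn't stated above) could look like:
# /etc/rinetd.conf
# bindaddress   bindport   connectaddress   connectport
192.168.0.5     5432       10.0.0.12        5432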
