Install NebulaGraph
sudo rpm -ivh nebula-graph-3.3.0.el7.x86_64.rpm
Start NebulaGraph
sudo /usr/local/nebula/scripts/nebula.service start all
[INFO] Starting nebula-metad...
[INFO] Done
[INFO] Starting nebula-graphd...
[INFO] Done
[INFO] Starting nebula-storaged...
[INFO] Done
Run the following command to check the service status of NebulaGraph.
sudo /usr/local/nebula/scripts/nebula.service status all
The returned result is shown below; there seems to be a problem with nebula-storaged. How do I deal with this issue?
[INFO] nebula-metad(33fd35e): Running as 29020, Listening on 9559
[INFO] nebula-graphd(33fd35e): Running as 29095, Listening on 9669
[WARN] nebula-storaged after v3.0.0 will not start service until it is added to cluster.
[WARN] See Manage Storage hosts:ADD HOSTS in https://docs.nebula-graph.io/
[INFO] nebula-storaged(33fd35e): Running as 29147, Listening on 9779
To the best of my knowledge, starting from NebulaGraph 3.0.0, the Meta service can no longer directly read or write data in a Storage service that is merely added in the configuration file; the configuration file only registers the Storage service with the Meta service.
You must run the ADD HOSTS command before the Meta service can read and write data in the Storage service.
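For example, on a single-node installation you could register the local Storage host from the console client (a sketch assuming the default graphd port 9669, the default storaged port 9779, and the default root/nebula credentials):
./nebula-console -addr 127.0.0.1 -port 9669 -u root -p nebula
# then, at the nebula> prompt:
ADD HOSTS 127.0.0.1:9779;
SHOW HOSTS;
After ADD HOSTS, SHOW HOSTS should list the Storage host with status ONLINE.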
I followed this procedure for attaching an EFS file system to instances created using Elastic Beanstalk:
https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-mount-efs-volumes/
But the logs of Elastic Beanstalk are showing following error:
[Instance: i-06593*****] Command failed on instance. Return code: 1 Output: (TRUNCATED)...fs ... mount -t efs -o tls fs-d9****:/ /efs Failed to resolve "fs-d9****.efs.us-east-1.amazonaws.com" - check that your file system ID is correct. See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail. ERROR: Mount command failed!. command 01_mount in .ebextensions/storage-efs-mountfilesystem.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
I have masked part of the EFS ID with **** for security.
Based on the comments, the solution was to create a new EFS file system instead of reusing the original one; the mount helper's failure to resolve the file system's DNS name suggests the original file system ID could not be found in the us-east-1 region.
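If you hit the same error, it may help to first confirm that the file system ID actually exists in the region the environment runs in. A quick check with the AWS CLI (a sketch; the ID below is a placeholder for your real one):
# a FileSystemNotFound error here means the ID is wrong, deleted,
# or belongs to another region/account, which matches the DNS failure above
aws efs describe-file-systems --file-system-id fs-xxxxxxxx --region us-east-1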
I have a local gremlin server running:
bin/gremlin.sh
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.codehaus.groovy.vmplugin.v7.Java7$1 (file:/Users/jwan/Downloads/apache-tinkerpop-gremlin-console-3.4.4/lib/groovy-2.5.7-indy.jar) to constructor java.lang.invoke.MethodHandles$Lookup(java.lang.Class,int)
WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.vmplugin.v7.Java7$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
plugin activated: tinkerpop.tinkergraph
But when I try to connect to it from Python:
from gremlin_python.structure.graph import Graph
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from os import environ
graph = Graph()
graph_db = traversal().withGraph(graph).withRemote(DriverRemoteConnection('ws://localhost:3000/gremlin','g'))
I get a connection refused error. How do I connect to this locally?
That console session shows output from the Gremlin Console, not Gremlin Server. They are two completely different distributions. Download the Gremlin Server distribution from the Apache TinkerPop download page and start it with bin/gremlin-server.sh. Your output should look like this after it has started:
[INFO] GremlinServer
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
[INFO] GremlinServer - Configuring Gremlin Server from conf/gremlin-server-modern.yaml
[INFO] MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
[INFO] DefaultGraphManager - Graph [graph] was successfully configured via [conf/tinkergraph-empty.properties].
[INFO] ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
[INFO] ServerGremlinExecutor - Initialized GremlinExecutor and preparing GremlinScriptEngines instances.
[INFO] ServerGremlinExecutor - Initialized gremlin-groovy GremlinScriptEngine and registered metrics
[INFO] ServerGremlinExecutor - A GraphTraversalSource is now bound to [g] with graphtraversalsource[tinkergraph[vertices:0 edges:0], standard]
[INFO] OpLoader - Adding the standard OpProcessor.
[INFO] OpLoader - Adding the session OpProcessor.
[INFO] OpLoader - Adding the traversal OpProcessor.
[INFO] TraversalOpProcessor - Initialized cache for TraversalOpProcessor with size 1000 and expiration time of 600000 ms
[INFO] GremlinServer - Executing start up LifeCycleHook
[INFO] Logger$info - Loading 'modern' graph data.
[INFO] GremlinServer - idleConnectionTimeout was set to 0 which resolves to 0 seconds when configuring this value - this feature will be disabled
[INFO] GremlinServer - keepAliveInterval was set to 0 which resolves to 0 seconds when configuring this value - this feature will be disabled
[WARN] AbstractChannelizer - The org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0 serialization class is deprecated.
[INFO] AbstractChannelizer - Configured application/vnd.gremlin-v3.0+gryo with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0
[WARN] AbstractChannelizer - The org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0 serialization class is deprecated.
[INFO] AbstractChannelizer - Configured application/vnd.gremlin-v3.0+gryo-stringd with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0
[INFO] AbstractChannelizer - Configured application/vnd.gremlin-v3.0+json with org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0
[INFO] AbstractChannelizer - Configured application/json with org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0
[INFO] AbstractChannelizer - Configured application/vnd.graphbinary-v1.0 with org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1
[INFO] AbstractChannelizer - Configured application/vnd.graphbinary-v1.0-stringd with org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1
[INFO] GremlinServer$1 - Gremlin Server configured with worker thread pool of 1, gremlin pool of 4 and boss thread pool of 1.
[INFO] GremlinServer$1 - Channel started at port 8182.
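Note the final line: Gremlin Server listens on port 8182 by default, not 3000, so the Python connection string has to match. A minimal sketch of the start-up, assuming the sample config shown in the log above:
# run from the unpacked Gremlin Server directory (not the Console directory)
bin/gremlin-server.sh conf/gremlin-server-modern.yaml
# then point the Python driver at the port from the log:
# DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')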
I am trying to deploy Shiny Server Pro on an Ubuntu 19.04 machine. I opted for the 45-day evaluation period, wherein RStudio provides unrestricted access to Shiny Server Pro features for that period. But after setting up the server, I found out in the logs that my evaluation period had expired after just one day of use.
To set up the server, I followed these steps:
Entered my details in the evaluation form at https://rstudio.com/products/shiny-server-pro/evaluation/
Received a link by email that took me to the page containing the commands to set up Shiny Server Pro on Ubuntu: https://rstudio.com/products/shiny/download-commercial/ubuntu/
After following the steps and changing the run_as parameter in the shiny-server.conf file to my username, I was able to get the server running, but found this in the logs:
[2019-10-15T14:05:28.424] [INFO] shiny-server - Shiny Server Pro v1.5.12.1023 (Node.js v10.15.3)
[2019-10-15T14:05:28.426] [INFO] shiny-server - Using config file "/etc/shiny-server/shiny-server.conf"
[2019-10-15T14:05:28.427] [INFO] shiny-server - Using non-persistent random cookie secret
[2019-10-15T14:05:28.467] [INFO] shiny-server - No authentication system configured.
[2019-10-15T14:05:28.469] [INFO] shiny-server - Starting listener on http://[::]:3838
[2019-10-15T14:05:28.478] [INFO] shiny-server - License type: traditional
[2019-10-15T14:05:28.539] [INFO] shiny-server - Licensing check succeeded.
[2019-10-15T14:05:28.541] [ERROR] shiny-server - Your evaluation has ended. Please activate!
I tried the same steps on an EC2 instance, but ran into the same problem.
For anyone facing a similar issue: I was able to resolve the above error by raising a ticket with RStudio, who acknowledged the problem and were kind enough to provide me with a new trial period key.
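Once you have a fresh key, it can typically be applied with the bundled license manager and a service restart (a sketch based on RStudio's usual licensing tooling; the path and the KEY placeholder are assumptions, so check the admin guide for your version):
# activate the new trial/license key (KEY is a placeholder)
sudo /opt/shiny-server/bin/license-manager activate KEY
# restart so the licensing check runs again
sudo systemctl restart shiny-server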
I have an Airflow installation (on Kubernetes). My setup uses DaskExecutor. I have also configured remote logging to S3. However, while a task is running, I cannot see its log; I get this error instead:
*** Log file does not exist: /airflow/logs/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log
*** Fetching from: http://airflow-worker-74d75ccd98-6g9h5:8793/log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-74d75ccd98-6g9h5', port=8793): Max retries exceeded with url: /log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7d0668ae80>: Failed to establish a new connection: [Errno -2] Name or service not known',))
Once the task is done, the log is shown correctly.
I believe what Airflow is doing is:
- for finished tasks, read logs from S3
- for running tasks, connect to the executor's log server endpoint and show that
It looks like Airflow is using celery.worker_log_server_port to connect to my Dask workers to fetch logs from there.
How do I configure DaskExecutor to expose a log server endpoint?
My configuration:
[core]
remote_logging = True
remote_base_log_folder = s3://some-s3-path
executor = DaskExecutor

[dask]
cluster_address = 127.0.0.1:8786

[celery]
worker_log_server_port = 8793
What I verified:
- the log file exists and is being written to on the executor while the task is running
- ran netstat -tunlp in the executor container, but did not find any extra port exposed from which logs could be served
UPDATE
Have a look at the airflow serve_logs CLI command; I believe it does exactly the same thing.
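In Airflow 1.10-era releases this is the command Celery workers use to expose their log directory over HTTP (run it inside the worker container; the port comes from worker_log_server_port, 8793 by default):
airflow serve_logs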
We solved the problem by simply starting a Python HTTP server on the worker.
Dockerfile:
RUN mkdir -p $AIRFLOW_HOME/serve
RUN ln -s $AIRFLOW_HOME/logs $AIRFLOW_HOME/serve/log
worker.sh (run by Docker CMD):
#!/usr/bin/env bash
# serve the Airflow log directory over HTTP on the port the webserver expects
cd $AIRFLOW_HOME/serve
python3 -m http.server 8793 &
cd -
# hand off to the real entrypoint, forwarding any container arguments
dask-worker "$@"
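With that in place, the webserver's fetch URL from the error message should start returning the live log. You can verify it from any pod that can reach the worker, using the same path as in the error above:
curl -s http://airflow-worker-74d75ccd98-6g9h5:8793/log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log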
I've been migrating to Firebase 3.0, and with the new changes we have to use firebase serve on the CLI for local development, which I believe defaults to port 5000. However, after going through the init process, running firebase serve doesn't do anything after "Starting Firebase development server..." even when I specify port 5000 explicitly. Attempted fixes:
Tried with other ports, like 5001
Reinstalled Node (4.x and 6.x)
Reinstalled NPM
Removed firebase-cli (since firebase-tools is now being used)
Reinstalled firebase-tools with npm
Tweaked firebase init endlessly
Tried on different user accounts on my computer
Restarted computer
Checked that port 5000 was free with lsof -i tcp:5000
Tested address variants like localhost:5000, 127.x, and 192.x
Here is the debug log:
[debug] ----------------------------------------------------------------------
[debug] Command: /usr/local/bin/node /usr/local/bin/firebase serve -p 5000 --debug
[debug] CLI Version: 3.0.0
[debug] Platform: darwin
[debug] Node Version: v6.2.0
[debug] Time: Sun May 22 2016 01:29:59 GMT+0200 (CEST)
[debug] ----------------------------------------------------------------------
[debug]
[info] Starting Firebase development server...
[info]
[info] Project Directory: /Users/user/Documents/localdev/spfwork
Any thoughts on how to fix this?
Thank you for your help.
Fixed: firebase serve from firebase-tools (npm) was missing a logger for some errors, which I added in a pull request here: https://github.com/firebase/firebase-tools/pull/143
My error was that localhost was not resolving for some reason, so I changed the command to firebase serve -p 5000 -o 127.0.0.1, and specifying the listen address allowed the server to start successfully.
For reference, the error was Error: getaddrinfo ENOTFOUND localhost
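To confirm that localhost resolution is the culprit before changing anything, you can reproduce the lookup that the Node-based CLI performs (a quick diagnostic, assuming node is on your PATH):
# an ENOTFOUND error here confirms the resolver cannot map 'localhost'
node -e "require('dns').lookup('localhost', (err, addr) => console.log(err || addr))"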
You could just change your /etc/hosts file and use firebase serve normally.
To do this:
Launch Terminal
Type sudo nano /etc/hosts and press Return
Enter your admin password
Paste the following:
##
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
Save the file by pressing Ctrl + O
Exit with Ctrl + X
This should fix it.
If Firebase cannot find the public folder, this error might show up. In that case, the error can be resolved by putting index.html and the website's other static files and assets inside the public folder, and running firebase deploy from the Firebase CLI again.
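A minimal sketch of that layout, assuming the default hosting directory name public from firebase init (the asset file names below are hypothetical):
# the hosting section of firebase.json points at "public" by default
mkdir -p public
cp index.html public/    # plus any other static assets your site needs
firebase deploy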