Error when starting Neo4j community instance with offline graph.db

My graph has approximately 27 million relations and 15 million nodes.
When I try to start the Neo4j instance with the graph.db database that was created using the neo4j-import tool, I get the following error:
sudo service neo4j status
● neo4j.service - Neo4j Graph Database
Loaded: loaded (/lib/systemd/system/neo4j.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Mai 29 17:14:55 ammer-XPS-15-9550 neo4j[21230]: at org.neo4j.kernel.internal.StoreLocker.checkLock(StoreLocker.java:75)
Mai 29 17:14:55 ammer-XPS-15-9550 neo4j[21230]: ... 13 more
Mai 29 17:14:55 ammer-XPS-15-9550 neo4j[21230]: 2017-05-29 15:14:55.089+0000 INFO Neo4j Server shutdown initiated by request
Mai 29 17:14:55 ammer-XPS-15-9550 systemd[1]: neo4j.service: Main process exited, code=exited, status=1/FAILURE
Mai 29 17:14:55 ammer-XPS-15-9550 systemd[1]: neo4j.service: Unit entered failed state.
Mai 29 17:14:55 ammer-XPS-15-9550 systemd[1]: neo4j.service: Failed with result 'exit-code'.
Mai 29 17:14:55 ammer-XPS-15-9550 systemd[1]: neo4j.service: Service hold-off time over, scheduling restart.
Mai 29 17:14:55 ammer-XPS-15-9550 systemd[1]: Stopped Neo4j Graph Database.
Mai 29 17:14:55 ammer-XPS-15-9550 systemd[1]: neo4j.service: Start request repeated too quickly.
Mai 29 17:14:55 ammer-XPS-15-9550 systemd[1]: Failed to start Neo4j Graph Database.

Try uncommenting this line in neo4j.conf, as I think neo4j-import was not yet upgraded to support the 3.2.x store format:
# Enable this to be able to upgrade a store from an older version.
#dbms.allow_format_migration=true
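For example, on a Debian-style package install (the config path is an assumption; a tarball install keeps its config under conf/ instead), the edit and restart would look roughly like this:
sudo nano /etc/neo4j/neo4j.conf
# change
#dbms.allow_format_migration=true
# to
dbms.allow_format_migration=true
sudo service neo4j restart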

Related

mariadb.service: Scheduled restart job, restart counter is at - debian bullseye

I've upgraded my server from stretch to buster and then to bullseye, and since then I have had problems with the MariaDB server, which restarts often. While it is restarting, my email doesn't work because the virtual table lookups fail, etc.
The MariaDB version is:
mariadb -v
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 9135
Server version: 10.5.15-MariaDB-0+deb11u1-log Debian 11
Then in syslog I can see this:
cat /var/log/syslog |grep "mariadb.service"
Aug 19 10:34:45 srv systemd[1]: mariadb.service: Main process exited, code=killed, status=6/ABRT
Aug 19 10:34:45 srv systemd[1]: mariadb.service: Failed with result 'signal'.
Aug 19 10:34:50 srv systemd[1]: mariadb.service: Scheduled restart job, restart counter is at 401.
I don't know how to resolve this problem. Maybe I should purge all MariaDB and MySQL packages and then reinstall MariaDB?
MariaDB is restarting 6-10 times per hour.
In syslog I can also see this interesting part:
Aug 19 10:34:45 srv mariadbd[22091]: 2022-08-19 10:34:45 11917 [ERROR] [FATAL] InnoDB: Page old data size 15870 new data size 8280, page old max ins size 36 new max ins size 7626
Aug 19 10:34:45 srv mariadbd[22091]: 220819 10:34:45 [ERROR] mysqld got signal 6 ;
Aug 19 10:34:45 srv mariadbd[22091]: This could be because you hit a bug. It is also possible that this binary
Aug 19 10:34:45 srv mariadbd[22091]: or one of the libraries it was linked against is corrupt, improperly built,
Aug 19 10:34:45 srv mariadbd[22091]: or misconfigured. This error can also be caused by malfunctioning hardware.
Aug 19 10:34:45 srv mariadbd[22091]: To report this bug, see https://mariadb.com/kb/en/reporting-bugs
Aug 19 10:34:45 srv mariadbd[22091]: We will try our best to scrape up some info that will hopefully help
Aug 19 10:34:45 srv mariadbd[22091]: diagnose the problem, but since we have already crashed,
Aug 19 10:34:45 srv mariadbd[22091]: something is definitely wrong and this may fail.
Aug 19 10:34:45 srv mariadbd[22091]: Server version: 10.5.15-MariaDB-0+deb11u1-log
Aug 19 10:34:45 srv mariadbd[22091]: key_buffer_size=792723456
Aug 19 10:34:45 srv mariadbd[22091]: read_buffer_size=131072
Aug 19 10:34:45 srv mariadbd[22091]: max_used_connections=15
Aug 19 10:34:45 srv mariadbd[22091]: max_threads=2002
Aug 19 10:34:45 srv mariadbd[22091]: thread_count=12
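The fatal InnoDB error ("Page old data size ... new data size") followed by signal 6 usually points at a corrupted InnoDB page on disk, so purging and reinstalling the packages would replace the binaries but keep the broken data files. A common first step, sketched below (the config path is the Debian 11 default; back up /var/lib/mysql before touching anything), is to start the server in InnoDB forced-recovery mode and take a logical dump:
sudo systemctl stop mariadb
sudo cp -a /var/lib/mysql /root/mysql-backup   # raw backup first

# /etc/mysql/mariadb.conf.d/50-server.cnf, in the [mysqld] section:
# start at 1 and raise the value only if the server still aborts
innodb_force_recovery = 1

sudo systemctl start mariadb
mysqldump --all-databases > /root/all-databases.sql
If the dump succeeds, remove innodb_force_recovery again, restore the affected databases from the dump, and only consider reinstalling packages if the crashes persist on healthy data.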

How to restart Meteor with systemd

I need to use systemd to restart Meteor as a service, so I created cloud.service in /etc/systemd/system/. The file looks like this:
[Unit]
Description=cloud
After=network.target
[Service]
User=someone
Type=simple
WorkingDirectory=/home/someone/cloud/
ExecStart=/home/someone/cloud/start.sh
Restart=always
[Install]
WantedBy=multi-user.target
and start.sh looks like this:
nohup meteor &
But when the system restarts, the service cannot start.
● cloud.service - cloud
Loaded: loaded (/etc/systemd/system/cloud.service; enabled; vendor preset: enabled)
Active: failed (Result: start-limit-hit) since Fri 2018-11-30 03:22:51 UTC; 13min ago
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Unit entered failed state.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Failed with result 'exit-code'.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Service hold-off time over, scheduling restart.
Nov 30 03:22:51 cloud-euro systemd[1]: Stopped cloud.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Start request repeated too quickly.
Nov 30 03:22:51 cloud-euro systemd[1]: Failed to start cloud.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Unit entered failed state.
Nov 30 03:22:51 cloud-euro systemd[1]: cloud.service: Failed with result 'start-limit-hit'.
I've tried using Type=forking, but the situation doesn't change. Any suggestions?
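A likely cause, for what it's worth: with Type=simple, systemd tracks the process started by ExecStart, and nohup meteor & backgrounds Meteor so start.sh exits immediately. systemd concludes the service died, Restart=always re-runs it, and the cycle repeats until the start limit is hit, which matches the 'start-limit-hit' result above. A minimal sketch of a fix is to keep Meteor in the foreground:
#!/bin/sh
# start.sh - run Meteor in the foreground so systemd can track it as the main process
exec meteor
Alternatively, ExecStart can point at the meteor binary directly (systemd wants an absolute path there) and the wrapper script can be dropped.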

Error installing Apache CloudStack management on Ubuntu 16.04

I am installing Apache CloudStack on Ubuntu 16.04, but after running the CloudStack setup, starting the cloudstack-management service displays the following errors. (I have installed Tomcat 7; Tomcat 6 is not installed.)
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Job for cloudstack-management.service failed because the control process exited with error code.
See "systemctl status cloudstack-management.service" and "journalctl -xe" for details.
Running systemctl status cloudstack-management.service displays the following:
cloudstack-management.service - LSB: Start Tomcat (CloudStack).
Loaded: loaded (/etc/init.d/cloudstack-management; bad; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2018-08-25 21:53:07 IST; 1min 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 26684 ExecStart=/etc/init.d/cloudstack-management start (code=exited, status=1/FAILURE)
Aug 25 21:53:07 dhaval-pc systemd[1]: Starting LSB: Start Tomcat (CloudStack)....
Aug 25 21:53:07 dhaval-pc cloudstack-management[26684]: * cloudstack-management is not installed
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Control process exited, code=exited status=1
Aug 25 21:53:07 dhaval-pc systemd[1]: Failed to start LSB: Start Tomcat (CloudStack)..
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Unit entered failed state.
Aug 25 21:53:07 dhaval-pc systemd[1]: cloudstack-management.service: Failed with result 'exit-code'.
Warning: cloudstack-management.service changed on disk. Run 'systemctl daemon-reload' to reload units.
What change can I make in the /etc/init.d/cloudstack-management file?
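Rather than editing the init script, two things may be worth trying first (a sketch; it assumes the standard cloudstack-management package, whose "is not installed" message typically means the management server was never configured):
sudo systemctl daemon-reload             # clears the 'changed on disk' warning
sudo cloudstack-setup-management         # (re)configures the management server
sudo systemctl start cloudstack-management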

Kibana - process not starting - log not clear

Running on Ubuntu 16.04
Elastic version: 6.2.4
Kibana version: 6.2.4
Elastic is up and running on port 9200.
Kibana suddenly stopped working. I am trying to run the start command sudo systemctl start kibana.service, and I get the following error on the service stdout (journalctl -fu kibana.service):
Started Kibana.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Main process exited, code=exited, status=1/FAILURE
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Unit entered failed state.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Failed with result 'exit-code'.
Aug 27 12:54:33 ubuntuserver systemd[1]: kibana.service: Service hold-off time over, scheduling restart.
Aug 27 12:54:33 ubuntuserver systemd[1]: Stopped Kibana.
There are no details in this log.
My YAML configuration has only these properties:
server.port: 5601
server.host: "0.0.0.0"
I have also tried writing to a log file (hoping for more info there) by adding these settings:
logging.dest: /var/log/kibana/kibana.log
logging.verbose: true
I gave the folder/file full access permissions, but nothing is being written there (it still writes to stdout).
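When the journal shows nothing beyond the exit status, running Kibana in the foreground as the service user usually surfaces the real startup error. The paths below assume the standard .deb install of 6.2.4:
sudo -u kibana /usr/share/kibana/bin/kibana -c /etc/kibana/kibana.yml
# if logging.dest is set, also confirm the service user owns the log path:
ls -ld /var/log/kibana
sudo chown kibana:kibana /var/log/kibana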

hubot shell not loading scripts and packages

Hubot works when I have an adapter, but when I try to do local development using the Shell adapter, none of the scripts or packages are loaded.
This works:
root@dev:~/hubot# bin/hubot forever start -w --watchDirectory ${PWD} --watchIgnore ${PWD}/*.log --pidfile ${PWD}/hubot.pid -l ${PWD}/hubot.log -a -c coffee node_modules/.bin/hubot --adapter slack
Strangely, when I try to do local development and testing with:
root@dev:~/hubot# bin/hubot
[Thu Apr 07 2016 00:33:10 GMT+0000 (UTC)] INFO hubot-redis-brain: Using default redis on localhost:6379
eve>
eve> help
usage:
history
exit, \q - close shell and exit
help, \? - print this usage
clear, \c - clear the terminal screen
None of my scripts or modules show up when I type help in the shell. But when I do this in Slack, I see all the available scripts and modules:
eve <user> doesn't have <role> role - Removes a role from a user
eve <user> has <role> role - Assigns a role to a user
eve adapter - Reply with the adapter
eve delete reminder <action> - Delete reminder matching <action> (exact match required)
eve deploy <gitsha> to production - Runs Jenkins Phase 1 deployment.
eve echo <text> - Reply back with <text>
eve flip production pools - Flips the yin and yang production pools
eve help - Displays all of the help commands that Hubot knows about.
eve help <query> - Displays all help commands that match <query>.
eve list jobs - List current cron jobs
eve new job "<crontab format>" <message> - Schedule a cron job to say something
eve new job <crontab format> "<message>" - Ditto
eve new job <crontab format> say <message> - Ditto
eve ping - Reply with pong
Update 1: I turned on debug-level logging, and I can see the scripts being parsed, but the scripts aren't available to me when executing their commands :(
[Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Loading adapter shell
eve> [Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Loading scripts from /root/hubot/scripts
[Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/scripts/example.coffee
[Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/scripts/prod_deploy.coffee
[Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/scripts/remindme.coffee
[Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/scripts/team_tools.coffee
[Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/scripts/update.coffee
[Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Loading scripts from /root/hubot/src/scripts
[Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Loading hubot-scripts from /root/hubot/node_modules/hubot-scripts/src/scripts
[Thu Apr 07 2016 00:46:44 GMT+0000 (UTC)] DEBUG Loading external-scripts from npm packages
[Thu Apr 07 2016 00:46:45 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/node_modules/hubot-diagnostics/src/diagnostics.coffee
[Thu Apr 07 2016 00:46:45 GMT+0000 (UTC)] INFO hubot-redis-brain: Using default redis on localhost:6379
[Thu Apr 07 2016 00:46:45 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/node_modules/hubot-redis-brain/src/redis-brain.coffee
[Thu Apr 07 2016 00:46:45 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/node_modules/hubot-auth/src/auth.coffee
[Thu Apr 07 2016 00:46:45 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/node_modules/hubot-help/src/help.coffee
[Thu Apr 07 2016 00:46:45 GMT+0000 (UTC)] DEBUG Parsing help for /root/hubot/node_modules/hubot-cron/src/scripts/cron.coffee
Update 2: I realized part of my problem was that the bin/hubot file explicitly sets my bot name to eve, while I have been trying it with evedev, my development Hubot name. However, I'm still wondering why the help command does not show all the available commands in the shell, while in Slack it does.
In answer to your Update 2: you need to put the bot's name before help, like this:
myhubot> help
usage:
history
exit, \q - close shell and exit
help, \? - print this usage
clear, \c - clear the terminal screen
vs
myhubot> myhubot help
myhubot> Shell: myhubot adapter - Reply with the adapter
myhubot animate me <query> - The same thing as `image me`, except adds a few parameters to try to return an animated GIF instead.
myhubot echo <text> - Reply back with <text>
myhubot help - Displays all of the help commands that Hubot knows about.
myhubot help <query> - Displays all help commands that match <query>.
myhubot image me <query> - The Original. Queries Google Images for <query> and returns a random top result.
myhubot map me <query> - Returns a map view of the area returned by `query`.
myhubot mustache me <url|query> - Adds a mustache to the specified URL or query result.
myhubot ping - Reply with pong
myhubot pug bomb N - get N pugs
myhubot pug me - Receive a pug
myhubot the rules - Make sure hubot still knows the rules.
myhubot time - Reply with current time
myhubot translate me <phrase> - Searches for a translation for the <phrase> and then prints that bad boy out.
myhubot translate me from <source> into <target> <phrase> - Translates <phrase> from <source> into <target>. Both <source> and <target> are optional
ship it - Display a motivation squirrel
You can check your current robot name in bin/hubot. The last line looks like:
exec node_modules/.bin/hubot --name "botname" "$@"
With the above setting, the bot name will be botname.
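So, applied to the question, one fix (a sketch against the generator-produced bin/hubot) is to give the development bot its own name there, or to override it when starting:
# bin/hubot, last line - name the dev bot explicitly
exec node_modules/.bin/hubot --name "evedev" "$@"

# or override it per run:
bin/hubot --name evedev
In the shell you would then address it as evedev help.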
