Installation of Riak under Ubuntu 14.04 LTS
I can't get Riak to work on Ubuntu 14.04 LTS using the bash instructions at
http://docs.basho.com/riak/latest/ops/building/installing/debian-ubuntu/.
When running riak start I get:
riak failed to start within 15 seconds,
see the output of 'riak console' for more information.
If you want to wait longer, set the environment variable
WAIT_FOR_ERLANG to the number of seconds to wait.
When running riak console afterwards:
Exec: /usr/lib/riak/erts-5.10.3/bin/erlexec -boot /usr/lib/riak/releases/2.1.3/riak -config /var/lib/riak/generated.configs/app.2016.02.28.21.43.04.config -args_file /var/lib/riak/generated.configs/vm.2016.02.28.21.43.04.args -vm_args /var/lib/riak/generated.configs/vm.2016.02.28.21.43.04.args -pa /usr/lib/riak/lib/basho-patches -- console -x
Root: /usr/lib/riak
Erlang R16B02_basho8 (erts-5.10.3) [source] [64-bit] [smp:2:2] [async-threads:64] [kernel-poll:true] [frame-pointer]
[os_mon] memory supervisor port (memsup): Erlang has closed
[os_mon] cpu supervisor port (cpu_sup): Erlang has closed
{"Kernel pid terminated",application_controller,"{application_start_failure,riak_core,{bad_return,{{riak_core_app,start,[normal,[]]},{'EXIT',{{function_clause,[{orddict,fetch,['riak#127.0.0.1',[{'riak#54.194.69.48',[{{riak_core,bucket_types},[true,false]},{{riak_core,fold_req_version},[v2,v1]},{{riak_core,net_ticktime},[true,false]},{{riak_core,resizable_ring},[true,false]},{{riak_core,security},[true,false]},{{riak_core,staged_joins},[true,false]},{{riak_core,vnode_routing},[proxy,legacy]},{{riak_pipe,trace_format},[ordsets,sets]}]}]],[{file,\"orddict.erl\"},{line,72}]},{riak_core_capability,renegotiate_capabilities,1,[{file,\"src/riak_core_capability.erl\"},{line,441}]},{riak_core_capability,handle_call,3,[{file,\"src/riak_core_capability.erl\"},{line,213}]},{gen_server,handle_msg,5,[{file,\"gen_server.erl\"},{line,585}]},{proc_lib,init_p_do_apply,3,[{file,\"proc_lib.erl\"},{line,239}]}]},{gen_server,call,[riak_core_capability,{register,{riak_core,vnode_routing},{capability,[proxy,legacy],legacy,{riak_core,legacy_vnode_routing,[{true,legacy},{false,proxy}]}}},infinity]}}}}}}"}
Any idea how to fix this? Installation was done via apt-get, with the default riak.conf. The Riak version is 2.1.3.
This is a Riak error, not at all related to Ubuntu.
The error message indicates that the current name of the node does not match the name of any node in the ring file. This can happen if you start the node with the default configuration before setting the node's name. See the note on changing the name value at http://docs.basho.com/riak/latest/ops/building/basic-cluster-setup/
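For reference, the node name is set in riak.conf; a minimal sketch (the address shown is a placeholder, use the IP the node should keep):

nodename = riak@127.0.0.1

If the ring was created under a different name, changing this line alone is not enough; the old ring must also be dealt with, as described below.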
If this is a singleton node, the simplest solution will be to delete the files in /var/lib/riak/ring (make a backup first). A new one will be created when you start the node.
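If you take that route, the steps would look something like this (a sketch assuming the default Debian/Ubuntu package paths):

sudo riak stop
sudo cp -a /var/lib/riak/ring /var/lib/riak/ring.bak   # make a backup first
sudo rm /var/lib/riak/ring/*                           # delete the old ring files
sudo riak start                                        # a new ring is created on start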
Related
Error: Could not find pg_ctl executable for version 11 (PostgreSQL 11) + Let's Encrypt [closed]
I have a VPS with a domain pointing to it. I run a LAMP stack for my main website using the WordPress CMS, plus Odoo (Python with PostgreSQL) as my back end on a subdomain. Everything was working fine until I installed Certbot / Let's Encrypt to obtain an SSL certificate, following these tutorials:

For my WordPress site I installed this plugin: "WP Encryption – One Click single / wildcard Free SSL certificate & force HTTPS". It got me into a redirect loop because it forced HTTPS; I will explain that later on.

When the plugin didn't work, I searched for another way to cover the whole VPS and used these tutorials: "How To Secure Apache with Let's Encrypt on Ubuntu 16.04" and "How To Secure Apache with Let's Encrypt on Ubuntu 18.04".

After completing the second tutorial (Ubuntu 18.04), I noticed that all my domain traffic was going to HTTPS and got stuck in a loop ("ERR_TOO_MANY_REDIRECTS, which means the site redirected too many times"), and I couldn't access the WordPress front end on the domain. Then, when I applied "Step 3 — Allowing HTTPS Through the Firewall", my connection was interrupted, and when I got back to the SSH session I found myself locked out of the server with no way back in. When I tried the subdomain that runs Odoo, I got the same ERR_TOO_MANY_REDIRECTS error.

At this point I was hopeless and didn't know what to do. I contacted my VPS provider and told him exactly what had happened. He somehow managed to get me into the server again via a URL to a web terminal; I still couldn't access the server using SSH clients like PuTTY. The first thing I noticed after logging in was that he had rebooted the VPS (I will get back to this in a second).

So the first thing I did was remove the WordPress plugin "WP Encryption" and update the WordPress site URL in the wp_options table of the MySQL database, because the plugin had changed it from http to https; changing it back solved ERR_TOO_MANY_REDIRECTS for my WordPress website.

The second thing I did was disable the ufw firewall that I had enabled in Step 3 above. I instantly got my connection back using the SSH client PuTTY, but then I noticed the postgres service was inactive; it had gone down with the reboot of the VPS. I tried to start the service, but it gave me this error:

Failed to start postgresql.service: Unit postgresql.service is masked.
I searched for a solution and found these commands to unmask:

sudo systemctl unmask postgresql
sudo systemctl enable postgresql
sudo systemctl restart postgresql

The service started, and everything seems OK when I run the status command service postgresql status; the response is:

● postgresql.service - LSB: PostgreSQL RDBMS server
   Loaded: loaded (/etc/init.d/postgresql; generated)
   Active: active (exited) since Thu 2020-03-26 05:54:09 UTC; 2h 22min ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 0 (limit: 2286)
   Memory: 0B
   CGroup: /system.slice/postgresql.service

But when Odoo tries to connect to postgres on the default port, it says:

could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

After many searches I found that the postgres main cluster is also inactive/down. I tried to start it with pg_ctlcluster 11 main start, but I get this error:

Job for postgresql@11-main.service failed because the service did not take the steps required by its unit configuration.
See "systemctl status postgresql@11-main.service" and "journalctl -xe" for details.

When I run systemctl status postgresql@11-main.service as requested, I get:

● postgresql@11-main.service - PostgreSQL Cluster 11-main
   Loaded: loaded (/lib/systemd/system/postgresql@.service; disabled; vendor preset: enabled)
   Active: failed (Result: protocol) since Thu 2020-03-26 15:22:15 UTC; 14s ago
  Process: 18930 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 11-main start (code=exited, status=1/FAILURE)

along with:

systemd[1]: Starting PostgreSQL Cluster 11-main...
postgresql@11-main[18930]: Error: Could not find pg_ctl executable for version 11
systemd[1]: postgresql@11-main.service: Can't open PID file /run/postgresql/11-main.pid (yet?) after start: No such file or
systemd[1]: postgresql@11-main.service: Failed with result 'protocol'.
systemd[1]: Failed to start PostgreSQL Cluster 11-main.

I guessed that Let's Encrypt had added SSL configuration to pg_hba.conf and postgresql.conf, like it did with Apache, so I searched for those lines, commented out "ssl on", and restarted the postgres service along with the main cluster. Nothing changed; still the same error: Error: Could not find pg_ctl executable for version 11.

I know I shouldn't run pg_ctl directly under Ubuntu/Debian; I must use pg_ctlcluster instead, which is installed by postgresql-common. I have seen the man page documentation. But when I run sudo pg_ctlcluster 11 main reload, I always get the above error telling me it could not find the pg_ctl executable.

I have searched a lot for this problem, but nothing worked. How can I solve the missing pg_ctl executable for version 11?

PS: I am using Ubuntu 19.10 (GNU/Linux 5.3.0-24-generic x86_64), and Odoo 11 with postgres 11 as the database; Odoo can't connect to postgres, as I mentioned before.

Edit: Unfortunately I can't restore or recover the server to fix the postgres package, because my last backup of the server was on 19/3 and today is 26/3, and I have important data from the period in between.

Update 27/3/2020 4:06 AM: I compared my last server backup with the production server and found a lot of postgres files missing, for example under /usr/lib/postgresql/11/ and /etc/postgresql/11/. I think postgres somehow got damaged and lost files in the reboot of the server. However, I found the data files of the database, located in /var/lib/postgresql/11/. Can I read them on my backup server? I will try and let you know.
So finally, after hours of digging: all the PostgreSQL program files were damaged or missing, and I lost hope of repairing them. I don't know what caused it, but it is related to the accidental reboot of the server. I managed to find the main cluster data directory, with the important database information for the production server, at /var/lib/postgresql/11/, and I backed it up by zipping the whole folder:

zip -r main.zip main/

Then I did a full purge and reinstall of postgres, using these commands from here:

apt-get --purge remove postgresql\*

to remove everything PostgreSQL from your system. Just purging the postgres package isn't enough, since it's just an empty meta-package. Once all PostgreSQL packages have been removed, run:

rm -r /etc/postgresql/
rm -r /etc/postgresql-common/
rm -r /var/lib/postgresql/
userdel -r postgres
groupdel postgres

Then I installed postgres with this command to match Odoo 11:

sudo apt-get install postgresql libpq-dev -y

and created the Odoo PostgreSQL user:

sudo su - postgres -c "createuser -s odoo" 2> /dev/null || true

Now everything is OK and Odoo should work fine, but you still don't have any database. To bring back the backup of the cluster folder we took earlier, we need to move the zip file to the same directory we took it from, /var/lib/postgresql/11/. But before that, stop the postgres service:

sudo systemctl stop postgresql

and make sure it has stopped:

sudo systemctl status postgresql

After that, rename the main cluster that postgres is using right now, because it's empty and we don't need it; we are replacing it with our backed-up cluster:

mv /var/lib/postgresql/11/main /var/lib/postgresql/11/main_old

Then move the zip file from where you backed it up to the postgres cluster folder:

mv /backups/main.zip /var/lib/postgresql/11/

Unzip the folder in the same path:

unzip -a /var/lib/postgresql/11/main.zip

After unzipping, give ownership to the postgres user and group:

chown -R postgres:postgres main

Then you are good to go. Start the postgres service:

sudo systemctl start postgresql
sudo systemctl status postgresql

and make sure you also start the main cluster:

pg_ctlcluster 11 main start

If you stopped Odoo, make sure to start it as well:

service odoo-server start

PS: I solved ERR_TOO_MANY_REDIRECTS for the Odoo subdomain by commenting out the SSL configuration that Let's Encrypt had added to my Odoo Apache2 virtual host, and everything went back to where it was before installing Let's Encrypt. I guess I will leave it here and won't use SSL in production again until I figure out how to use it on a test server. Thanks for your time; I hope my question and answer help someone in the future.
Try adding pg_path to your Odoo configuration file, like:

pg_path = /path/to/postgresql/binaries

Generally, /usr/lib/postgresql/11/bin is the binary directory.
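A minimal sketch of how that might look in an odoo.conf (the path assumes the default Debian/Ubuntu PostgreSQL 11 package layout; the other values are placeholders to adjust for your install):

[options]
; where Odoo looks for PostgreSQL client binaries (pg_dump etc.)
pg_path = /usr/lib/postgresql/11/bin
db_host = False
db_port = False
db_user = odoo
db_password = False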
How to configure auto-restart of MariaDB when it gets killed?
How can MariaDB be restarted automatically when it goes down or gets killed? Right now I am using crontab, which doesn't really help, as it checks every minute rather than every second. Is there a solution based on systemd?
When installing from RPM or .deb packages on distributions that have switched to systemd, a service is set up by default already. If you are installing from a generic Linux binary tarball, a mariadb.service file is included in the archive but needs to be installed manually. See https://mariadb.com/kb/en/library/systemd/ for details. Or you could still use the classic mysqld_safe wrapper script, which starts mysqld for you and restarts it when "death of a child" signals are received, unless mysqld has been shut down gracefully with "mysqladmin shutdown". See: https://mariadb.com/kb/en/library/mysqld_safe/
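If the service is managed by systemd, a small drop-in override can also make systemd itself restart mariadb the moment the process dies; a sketch (the unit name and timing here are assumptions, adjust to your distribution):

# /etc/systemd/system/mariadb.service.d/restart.conf
[Service]
Restart=always
RestartSec=1s

After saving it, run sudo systemctl daemon-reload (and restart the service) so the override takes effect.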
HAAst terminating with exit code 158
I'm just trying to do a POC test with Telium's HAAst before we offer it to a customer, but I've stalled before even starting the haast daemon. Currently I have a single VM with Ubuntu 16.04 LTS and Digium's basic Asterisk 13 installation. I've configured haast.conf and it seems good, but I cannot start the haast daemon; it stops after a few seconds. Here is the relevant log output:

General, HAAst version 2.3.2.1 starting as daemon under process ID 2409
Controller, Local peer HAAst state changing to service start
License, License file not found. Switching to Free Edition
General, Settings contained 0 information; 0 warning; and 0 error messages.
Asterisk Controller, Unable to located executable to control Asterisk
Controller, Local peer HAAst state changing to service stop
Controller, Stopped
General, HAAst terminating with exit code 158 (failure to find asterisk control files) after running for 2 seconds

It seems haast is missing the event controller to start the Asterisk daemon; unfortunately it wasn't contained in the installation package. I've tried to create these files (asterisk.start & asterisk.stop) based on the other sample event files, set the executable bit, and written the shebang on the first line as described in the installation guide, but nothing helped. Does anybody have experience with this case? Thanks, Zsolt
This error means that High Availability for Asterisk (HAAst) is unable to find the service/executable file needed to control Asterisk. Since the 'distribution' setting in the [asterisk] stanza of the haast.conf file is set to 2 (Digium Asterisk), it means there's a problem with the Asterisk service file. Ubuntu 16 uses systemd, so have you installed Digium's asterisk.service (systemd) file? If you chose to install an init.d service file for Asterisk instead, you may have to explicitly tell HAAst which one to look for. If you installed neither, then that's your problem.

The maker of HAAst (Telium) has a support forum where this topic is addressed (click here). The pre and post Asterisk event handlers are available in the commercial versions of HAAst only, so that won't help (and it's also the wrong way to solve the problem). There are also a few Ubuntu-specific topics on the support forum https://www.telium.io/haast in case that helps.

If you can't find an Asterisk systemd service file, here's a sample:

[Unit]
Description=Asterisk PBX and telephony daemon
Documentation=man:asterisk(8)
Wants=network.target
After=network.target

[Service]
Type=simple
User=asterisk
Group=asterisk
ExecStart=/usr/bin/asterisk -f -C /etc/asterisk/asterisk.conf
ExecStop=/usr/bin/asterisk -rx 'core stop now'
ExecReload=/usr/bin/asterisk -rx 'core reload'

[Install]
WantedBy=multi-user.target

Just save that file as 'asterisk.service', place it in /etc/systemd/system/, and ensure the permissions match other service/unit files.
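Once the unit file is in place, the usual systemd steps apply (standard workflow, nothing HAAst-specific):

sudo systemctl daemon-reload             # pick up the new unit file
sudo systemctl enable asterisk.service   # start at boot
sudo systemctl start asterisk.service    # start now
sudo systemctl status asterisk.service   # verify it is running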
The HAAst configuration is missing or incorrect: "Unable to located executable to control Asterisk".
How do I get back to the running instance of riak-shell?
I was in riak-shell when ssh lost its connection to the server. After reconnecting, I ran:

sudo riak-shell

and got:

An instance of riak-shell is already running

So I restarted the riak node in question. This did not seem to solve the problem, and I do not see anything to kill in the output of ps -aux. According to the docs, only one instance can run at a time. That makes sense, but when I run riak-shell from another node and try to connect to any node, I now get the following:

Error: invalid function call : connection_EXT:connect ["riak@<<<ip_address_elided>>>"]
You can connect to a specific node (whether in your riak_shell.config or not) by typing 'connect "dev1@127.0.0.1";' substituting your node name for dev1. You may need to change the Erlang cookie to do this. See also the 'reconnect' command.
Unhandled message received is {#Ref<0.0.0.135>,disconnected}
riak-shell(3)>

I have not changed the cookies during this process, and the cookie appears to be the same (at least in /etc/riak/riak_shell.config). (I am running the Riak TS AMI on AWS.)
riak-shell runs in its own Erlang VM, entirely separate from the riak node. (You don't need to run riak-shell from the machine your node is on; it uses the normal riak-erlang-client to talk to riak.) If you are on Linux, ps aux | grep riak_shell_app will give you the process number you need to kill that instance:

08:30:45:~ $ ps aux | grep riak_shell_app
vagrant 4671 0.0 0.3 493260 34884 pts/4 Sl+ Aug17 0:03 /home/vagrant/riak_ee/dev/dev1/erts-5.10.3/bin/beam.smp -- -root /home/vagrant/riak_ee/dev/dev1 -progname erl -- -home /home/vagrant -- -boot /home/vagrant/riak_ee/dev/dev1/releases/2.1.1/start_clean -run riak_shell_app boot debug_off /home/vagrant/riak_ee/dev/dev1/bin/../log/riak_shell/riak_shell -noshell -config /home/vagrant/riak_ee/dev/dev1/bin/../etc/riak

I wrote a good chunk of it, so let me know how you get on: https://github.com/basho/riak_shell/graphs/contributors
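For illustration, killing the stale instance would then look something like this (the PID is whatever your ps output reports; pkill is an alternative if you prefer matching on the name):

ps aux | grep riak_shell_app   # note the PID in the second column
kill <pid>                     # replace <pid> with the number reported
# or, in one step:
pkill -f riak_shell_app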
openMPI/mpich2 doesn't run on multiple nodes
I am trying to install Open MPI and MPICH2 on a multi-node cluster, and in both cases I am having trouble running on multiple machines. Using MPICH2 I am able to run on a specific host from the head node, but if I try to run something from a compute node targeting a different node I get:

HYDU_sock_connect (utils/sock/sock.c:172): unable to connect from "destination_node" to "parent_node" (No route to host)
[proxy:0:0@destination_node] main (pm/pmiserv/pmip.c:189): unable to connect to server parent_node at port 56411 (check for firewalls!)

If I try to use SGE to set up a job, I get similar errors.

On the other hand, if I try to use Open MPI to run jobs, I am not able to run on any remote machine, even from the head node. I get:

ORTE was unable to reliably start one or more daemons.
This usually is caused by:

* not finding the required libraries and/or binaries on one or more nodes. Please check your PATH and LD_LIBRARY_PATH settings, or configure OMPI with --enable-orterun-prefix-by-default

* lack of authority to execute on one or more specified nodes. Please verify your allocation and authorities.

* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base). Please check with your sys admin to determine the correct location to use.

* compilation of the orted with dynamic libraries when static are required (e.g., on Cray). Please check your configure cmd line and consider using one of the contrib/platform definitions for your system type.

* an inability to create a connection back to mpirun due to a lack of common network interfaces and/or no route found between them. Please check network connectivity (including firewalls and network routing requirements).

The machines are connected to each other; I can ping, ssh passwordlessly, etc. from any of them to any other, and MPI_LIB and PATH are set correctly on all machines.
Usually this is caused because you didn't set up a hostfile or pass the list of hosts on the command line. For MPICH, you do this by passing the flag -host on the command line, followed by a list of hosts (host1,host2,host3, etc.):

mpiexec -host host1,host2,host3 -n 3 <executable>

You can also put these in a file:

host1
host2
host3

Then you pass that file on the command line like so:

mpiexec -f <hostfile> -n 3 <executable>

Similarly, with Open MPI, you would use:

mpiexec --host host1,host2,host3 -n 3 <executable>

and

mpiexec --hostfile hostfile -n 3 <executable>

You can get more information at these links:
MPICH - https://wiki.mpich.org/mpich/index.php/Using_the_Hydra_Process_Manager#Hydra_with_Non-Ethernet_Networks
Open MPI - http://www.open-mpi.org/faq/?category=running#mpirun-hostfile
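As a quick sanity check, it can help to launch a trivial non-MPI command such as hostname across the nodes before debugging a real application; a sketch (the hostfile name "hosts" is arbitrary):

# MPICH: expects one host per line in the file 'hosts'
mpiexec -f hosts -n 3 hostname

# Open MPI equivalent
mpiexec --hostfile hosts -n 3 hostname

Each launched process should print the name of the node it landed on; if this hangs or errors, the problem is connectivity or daemon startup, not your application.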