I've been away, and one of my databases grew so large that MariaDB crashed and I cannot get it to restart. I have tried moving /var/lib/ and restarting, to no avail. I have looked for messages in /var/log/mysql/ and /var/log, but there are no error messages to give me a clue.
Can anyone offer some solutions?
MariaDB: 10.0.12
Debian 7.6.
Thanks.
Check for error messages in the syslog. MariaDB 10.0.* sends them there unless you edit /etc/mariadb.conf.d/50-mysqld_safe.cnf and comment out the two lines: skip_error_log and syslog.
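A quick way to pull those messages out, assuming the default Debian syslog location:
# show the most recent MariaDB/mysqld lines from the syslog
grep -iE 'mysqld|mariadb' /var/log/syslog | tail -n 50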
Every time I try to connect to a remote MariaDB server it just fails, and in the logs it says:
18:32:43 [ERR][ WBContext]: Unsupported server version: mariadb.org binary distribution 10.5.15-MariaDB-1:10.5.15+maria~focal-log
Is this expected or is something else happening?
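A sanity check that is independent of the client: confirm the server is reachable and see what version string it reports (the version the error message complains about). A minimal sketch with the command-line client; the host and user are placeholders:
# connect and print the server version string
mysql -h <host> -P 3306 -u <user> -p -e "SELECT VERSION();"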
I've tried these with some success... I've decided to go with DBeaver for now. When I have more time I'll revisit this.
Automatic upgrade in mid-May.
1.14.10-gke.27 → 1.14.10-gke.36
https://cloud.google.com/kubernetes-engine/docs/release-notes?_ga=2.196679278.-315187236.1572486593#may_13_2020
After that, I started getting Memorystore (Redis) connection errors and cURL error 6.
cURL error:
cURL error 6: Could not resolve host: www.googleapis.com (see https://curl.haxx.se/libcurl/c/libcurl-errors.html)
This problem happens occasionally, not always.
It worked fine before the upgrade.
Role-based access control: not used
Workload Identity: not used
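When the cURL error 6 appears, one thing that can be spot-checked is whether DNS resolution from inside the cluster is failing at that moment; a throwaway pod works for this (the pod name and image below are arbitrary choices):
# run a one-off pod and try to resolve the host that cURL fails on
kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup www.googleapis.com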
Please advise.
Closing this for now.
It is probably not a GKE version issue.
The reason is that the node was a preemptible node.
That is the likely cause.
When I simulated a maintenance-event reboot on the preemptible node, I was able to reproduce the error:
gcloud compute instances simulate-maintenance-event <instance name> --zone <zone>
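To confirm which nodes are preemptible in the first place, GKE labels them; a quick check (this is the standard label, nothing specific to my cluster):
# list only the nodes carrying the GKE preemptible label
kubectl get nodes -l cloud.google.com/gke-preemptible=true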
I'm attempting to enable SSL on hiveserver2.
I can run in the default binary mode fine, and HTTP mode works no problem. As soon as I enable SSL through hive-site.xml, I'm faced with the following error:
ERROR [Thread-28] thrift.ThriftCLIService: Error starting HiveServer2: could not start ThriftHttpCLIService
java.net.BindException: Address already in use
Nothing is using any of the ports prior to starting hiveserver2 (checked with netstat -tupln).
The ports I've configured in hive-site.xml are:
hive.server2.webui.port 11002
hive.server2.thrift.http.port 11001
hive.server2.thrift.port 11000
and I'm invoking hiveserver2 via /opt/hive/bin/hive --service hiveserver2 &
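For context, enabling SSL in hive-site.xml goes through the standard HiveServer2 keystore properties, roughly along these lines (the paths and values shown here are placeholders rather than my exact settings):
hive.server2.use.SSL true
hive.server2.keystore.path /path/to/keystore.jks
hive.server2.keystore.password <keystore password>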
OS: Ubuntu (on Kubernetes)
Hive version: 3.0.0
Any help greatly appreciated. Google searches for problems with ThriftHttpCLIService came up short.
For anyone who comes across this post:
I upgraded to Hive 3.1.0, along with the metastore schema.
This fixed the issue, although unsure as to the underlying cause.
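For the metastore part, the schema upgrade goes through Hive's schematool; roughly like this (the -dbType value depends on which database backs your metastore):
# upgrade the metastore schema to match the new Hive version
$HIVE_HOME/bin/schematool -dbType mysql -upgradeSchema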
What is the proper way to run Kibana 4.5 as a service on CentOS 7?
When I run it as ./kibana, I can connect to it from another machine without any problem. When I run it with systemctl start kibana and check with ps -ef | grep '.*node/bin/node.*src/cli', it looks like it is running, but it refuses connections and then goes down. What can be the problem? Thanks in advance.
Here is the content of the kibana.service file:
[Unit]
Description=no description given
[Service]
Type=simple
User=kibana
Group=root
Environment=CONFIG_PATH=/opt/kibana/config/kibana.yml
ExecStart=/opt/kibana/bin/kibana
Restart=always
[Install]
WantedBy=multi-user.target
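For what it's worth, systemd should be capturing whatever Kibana writes to stdout/stderr, so the unit status and journal are the usual places to look (standard systemd commands, nothing Kibana-specific):
# service state plus the last log lines
sudo systemctl status kibana -l
# full journal for the unit, jump to the end
sudo journalctl -u kibana -e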
I am not that much of a Linux expert, but I recently installed Kibana using yum (https://www.elastic.co/guide/en/kibana/4.5/setup.html#kibana-yum) on a minimal installation of CentOS 7 and did not face any issues whatsoever.
In order to get some debug logs and find out what is wrong in your case, edit the Kibana configuration file
/opt/kibana/config/kibana.yml
and set a filename for the logging.dest property.
logging.dest: /var/log/kibana.log
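One gotcha worth guarding against: the user the service runs as needs to be able to write to that file. A minimal sketch, matching the User/Group from the unit file above:
# create the log file and hand it to the kibana user
sudo touch /var/log/kibana.log
sudo chown kibana:root /var/log/kibana.log
sudo systemctl restart kibana
tail -f /var/log/kibana.log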
Good luck
Igor,
I noticed a few questions you posted on Kafka, so it sounds like you need to set up a cluster that can ingest data and pass it to Elasticsearch. Kibana would be just the user interface.
In my experience, components like ELK, Kafka, ZooKeeper, etc. should be managed by a watchdog process. I highly recommend looking at something like supervisord: http://supervisord.org/
You should run the supervisor as a service and have it manage the rest. It will guarantee that components start at boot and, more importantly, restart them on failure and collect their logs. Kibana is a Node.js app that writes to stdout/stderr, so to know what fails you need to capture those streams.
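A minimal supervisord program stanza for Kibana might look like this (the command path is taken from the unit file above; the log locations are just examples):
[program:kibana]
command=/opt/kibana/bin/kibana
user=kibana
autostart=true
autorestart=true
; capture the stdout/stderr mentioned above
stdout_logfile=/var/log/supervisor/kibana.log
stderr_logfile=/var/log/supervisor/kibana-err.log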
I've upgraded my Chef server. Then I ran chef-server-ctl reconfigure successfully.
However, when I ran chef-server-ctl test, I got an error:
Encountered an error attempting to create client pedant_admin_client
Response Code was: 502
502 Bad Gateway
nginx/1.4.4
Can anyone help me?
tl;dr
sudo chef-server-ctl upgrade
WARNING: my org was preserved, but your results may vary
I ran into this too. I suspect an unexpected "apt-get upgrade" was the trigger. For me, all checks passed in status:
chef-server-ctl status
but this failed:
sudo chef-server-ctl test
I ran this for more details (beware: thousands of lines of output ;-)
sudo chef-server-ctl tail
and found this gem (note the ".." in the path, indicating a path misconfiguration):
2015-08-24_23:00 mkdir: cannot create directory '/opt/opscode/embedded/service/rabbitmq/sbin/../var': Permission denied
I then ran this, and it worked:
sudo chef-server-ctl upgrade
Ran into the same thing. Here is what I did to solve the problem.
Chef has logs, but also a lot of services. Check which one is failing.
chef-server-ctl status will indicate what is down.
Go look at that log under
/var/log/chef-server/<problem-service>/current
My particular problem was:
2014-09-27_17:33:32.41439 FATAL: could not create shared memory segment: Invalid argument
2014-09-27_17:33:32.41441 DETAIL: Failed system call was shmget(key=5432001, size=4050755584, 03600).
2014-09-27_17:33:32.41442 HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 4050755584 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
Thus, all I needed to do was run chef-server-ctl reconfigure.
Problem solved.
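If reconfigure alone does not take care of the kernel setting on your system, SHMMAX can also be raised by hand; a sketch (the value just needs to exceed the requested segment size from the log above):
# raise the limit for the running kernel
sudo sysctl -w kernel.shmmax=4294967296
# persist it across reboots
echo 'kernel.shmmax = 4294967296' | sudo tee -a /etc/sysctl.conf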