I enabled the error log and slow query log on MariaDB and could see data in the log files. But when I checked back a couple of hours later, both files were empty: each shows a size of 0, and no new entries are being written.
The machine has not been restarted during this time. Why are both the error log and the slow query log suddenly empty?
OS is Debian.
The error log rarely has anything.
The slow query log needs several settings to really turn it "ON":
log_output = FILE
slow_query_log = ON
slow_query_log_file = (fullpath to some file)
long_query_time = 1
log_slow_admin_statements = ON -- (optional)
log_queries_not_using_indexes = OFF -- (optional)
Explanation and more discussion, plus a recommendation on digesting the results: http://mysql.rjweb.org/doc.php/mysql_analysis#slow_queries_and_slowlog
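If the log files go quiet again, it is worth confirming at runtime that the settings above are still in effect. A minimal check, assuming the mysql command-line client and an account with enough privileges:
# Show the live values of the slow-log settings listed above
mysql -e "SHOW GLOBAL VARIABLES LIKE 'slow_query_log%'"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'long_query_time'"
mysql -e "SHOW GLOBAL VARIABLES LIKE 'log_output'"
If slow_query_log comes back OFF, or slow_query_log_file points somewhere unexpected, that would explain the empty files.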
I am going through https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file
But there is no option to rotate the log file.
This is causing huge log files to be created which have to be deleted manually.
Once deleted manually, telegraf does not recreate the file, and the only option is to restart telegraf.
I do not want to rotate the log file with a cron job, because telegraf may be in the middle of writing to it. Per our use case, we need to keep the last 10 minutes of telegraf output, with metrics being written every minute.
It seems like someone started in this direction but never completed it:
https://github.com/influxdata/telegraf/issues/1550
Please update telegraf to a newer version (1.12.x); it supports rotation on both the file output plugin and the agent log:
[[outputs.file]]
files = ["stdout", "/tmp/metrics.out"]
rotation_interval = "24h"
rotation_max_archives = 10
data_format = "influx"
[agent]
...
debug = false
quiet = false
logfile = "/var/log/telegraf/telegraf.log"
logfile_rotation_interval = "24h"
logfile_rotation_max_archives = -1
...
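After changing the config, a quick sanity check and restart along these lines will pick up the new rotation settings (the paths and service manager below assume a stock Debian package install and are an assumption on my part):
# Run the agent once in test mode to make sure the config parses
telegraf --config /etc/telegraf/telegraf.conf --test
# Restart the service so the rotation settings take effect
systemctl restart telegraf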
I am running owncloudcmd to sync files from a local* path to an ownCloud/Nextcloud server, all running Debian 8. However, it fails with the error:
[5] csync_statedb_query sqlite3_compile error: disk I/O error - on query PRAGMA quick_check;
[6] csync_statedb_load ERR: sqlite3 integrity check failed - bail out: disk I/O error.
#### ERROR during csync_update : "CSync failed to load the journal file. The journal file is corrupted."
I am not very familiar with csync or sqlite, so I am a bit in the dark; although I can find talk of this issue through googling, I can't find a fix. The data in this case can be dumped to start over, so I'm happy to flush any database or anything else. I've tried removing the created csync and journal files, assuming one of them was corrupted, but it doesn't seem to change anything.
I have read talk about changing PRAGMA settings to ignore the error (or the check), but I can't see how that is implemented either.
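For what it's worth, I assume the failing check is just the standard SQLite one, which can be run by hand with something like this (the path and journal filename are placeholders for whatever journal file csync created):
# Path and filename below are illustrative, not the actual name csync uses
sqlite3 /path/to/syncdir/.csync_journal.db "PRAGMA quick_check;"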
Is anyone able to show me how to clear out the corruption?
*The local path is a mounted AWS S3 bucket, but I think this is irrelevant because it works fine on other systems.
I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart tomcat it becomes corrupted again.
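For reference, I am invoking CheckIndex roughly like this (the jar name below reflects my setup and may differ; the index path is the one shown in the output further down):
# Check the Liferay Lucene index; add -fix to actually repair it
java -cp lucene-core-2.2.0.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix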
The index directory is shared between two application servers running Liferay-Tomcat. I am fixing the index on one server and restarting it while the other is running. This is a production environment, so I cannot bring them both down.
Any suggestions please?
Before fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
If you access your search index from more than one application server, I would suggest integrating a Solr server, so that you don't have two app servers trying to write to the same files. That is error-prone, as you have already found out.
To get Solr up and running, follow these steps:
Install a Solr server on any machine you like; a machine running only Solr is preferable.
Install the Solr search portlet in Liferay.
Adjust the config files according to the setup document of the Solr Search portlet.
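Once the Solr server is running, a quick way to confirm it is reachable from the Liferay machines before touching the portlet config is the standard ping handler (host and port below assume a default standalone Solr install; adjust to yours):
# Should return a response with status OK if Solr is reachable
curl http://your-solr-host:8983/solr/admin/ping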
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr
I'm getting this error soon after running riak start, despite a config file that should be working correctly.
It turns out that this is a limitation of Riak's error messaging: you will get the above message if you run riak-admin test on your setup before the configuration has finished loading.
I encountered the same problem while starting new Riak clusters over and over again during automated testing. My solution was, in my test fixture setup, to execute code that keeps trying to put an object into a Riak bucket until it eventually succeeds.
Granted, my solution here is an Erlang snippet, but it solves the problem in the absence of any Riak-supplied admin/wait function, and since I've used a number of different Riak versions, this technique seems to work across all of them.
%% Block until a put through the local Riak client succeeds.
wait_for_riak() ->
    {ok, C} = riak:local_client(),
    io:format("Waiting for Riak..."),
    wait_for_riak(C),
    io:format("and had a successful put.~n").

wait_for_riak(C) ->
    Strawman = riak_object:new(<<"test">>, <<"strawman">>, []),
    case C:put(Strawman, 1) of
        ok ->
            ok;
        _Error ->
            %% Not ready yet; wait a second and try again.
            receive after 1000 -> ok end,
            wait_for_riak(C)
    end.
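If you would rather not drop into Erlang, a shell-level variant of the same wait-and-retry idea looks something like this; note that riak ping only confirms the node is responding, not that riak_kv is fully up, so the put-based check above is the stricter test:
# Keep polling until the node answers with "pong"
until riak ping | grep -q pong; do
    sleep 1
done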
Adding a sleep 4, like so:
brew install riak
riak start
sleep 4
riak-admin test
should help
This is making me kind of crazy: I did a mysqldump of a partitioned table on one server, moved the resulting SQL dump to another server, and attempted to run the insert. It fails, but I'm having difficulty figuring out why. Google and the MySQL forums and docs have not been much help.
The failing query looks like this (truncated for brevity and clarity, names changed to protect the innocent):
CREATE TABLE `my_precious_table` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`somedata` varchar(20) NOT NULL,
`aTimeStamp` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
PRIMARY KEY (`id`,`aTimeStamp`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 DATA DIRECTORY='/opt/data/data2/data_foo/' INDEX DIRECTORY='/opt/data/data2/idx_foo/'
/*!50100 PARTITION BY RANGE (year(aTimeStamp)) SUBPARTITION BY HASH ( TO_DAYS(aTimeStamp))
(PARTITION p0 VALUES LESS THAN (2007) (SUBPARTITION foo0 DATA DIRECTORY = '/opt/data/data2/data_foo' INDEX DIRECTORY = '/opt/data/data2/idx_foo' ENGINE = MyISAM),
PARTITION p1 VALUES LESS THAN (2008) (SUBPARTITION foo1 DATA DIRECTORY = '/opt/data/data2/data_foo' INDEX DIRECTORY = '/opt/data/data2/idx_foo' ENGINE = MyISAM),
PARTITION p2 VALUES LESS THAN (2009) (SUBPARTITION foo2 DATA DIRECTORY = '/opt/data/data2/data_foo' INDEX DIRECTORY = '/opt/data/data2/idx_foo' ENGINE = MyISAM),
PARTITION p3 VALUES LESS THAN MAXVALUE (SUBPARTITION foo3 DATA DIRECTORY = '/opt/data/data2/data_foo' INDEX DIRECTORY = '/opt/data/data2/idx_foo' ENGINE = MyISAM)) */;
The error is:
ERROR 1 (HY000): Can't create/write to file '/opt/data/data2/idx_foo/my_precious_table#P#p0#SP#foo0.MYI' (Errcode: 13)
"Can't create/write to file" looked like a permissions issue to me, but permissions on the targeted folders look thus:
drwxrwxrwx 2 mysql mysql 4096 Dec 1 16:24 data_foo
drwxrwxrwx 2 mysql mysql 4096 Dec 1 16:25 idx_foo
For kicks, I've tried chowning to root:root and myself. This did not fix the issue.
Source MySQL server is version 5.1.22-rc-log. Destination server is 5.1.29-rc-community. Both are running on recent CentOS installations.
Edit: A little more research shows that Errcode 13 is, in fact, a permissions error. But how can I get that on rwxrwxrwx?
Edit: Bill Karwin's excellent suggestion didn't pan out. I'm working as the root user, and have all privilege flags set.
Edit: Creating the table WITHOUT specifying data directories for the individual partitions works - but I need to put these partitions on a larger disk than the one on which this MySQL instance puts tables by default. And I can't just specify the DATA/INDEX DIRECTORY at the table level - that's not legit in the version of MySQL I'm using (5.1.29-rc-community).
Edit: Finally came across the answer, thanks to the MySQL mailing list and internal IT staff. See below.
On Ubuntu, look into the AppArmor settings for MySQL:
vi /etc/apparmor.d/usr.sbin.mysqld
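For the layout in the question, the profile would need entries along these lines (a sketch; adjust the paths to your own data and index directories):
# Allow mysqld to use the non-default data/index location
/opt/data/data2/ r,
/opt/data/data2/** rwk,
Then reload the profile with /etc/init.d/apparmor reload (or apparmor_parser -r on the edited file).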
This should solve the permission issues. For a quick test, you can even try
/etc/init.d/apparmor stop
But don't forget to restart the service.
This took me some time to figure out, and after reading about SELinux it was clear that I had forgotten about this kind of protection on Ubuntu.
http://bugs.mysql.com/bug.php?id=19557
You will also receive an error message if the MySQL user ID running the query does not have "DATA FILE" privileges that allow the user ID to write to the file system.
In other words, it can be a permission problem with respect to SQL privileges, not operating system file permissions.
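To rule that out, check the grants for the account doing the load; in GRANT syntax the relevant privilege is FILE (the user and host below are placeholders):
# Look for FILE (or ALL PRIVILEGES) in the output
mysql -e "SHOW GRANTS FOR 'someuser'@'localhost'"
# If it is missing, an account with GRANT rights can add it
mysql -e "GRANT FILE ON *.* TO 'someuser'@'localhost'"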
It turned out to be an SELinux issue: all my filesystem permissions were fine, but there was a higher-level policy set against MySQL accessing that disk partition.
Lesson: when you have a permissions issue but ownership and filesystem permissions are obviously correct, look to SELinux.
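For anyone hitting the same thing, the sort of commands involved look like this (the path mirrors the question; semanage ships in the policycoreutils packages on CentOS):
# Is SELinux enforcing, and what context does the target directory carry?
getenforce
ls -Zd /opt/data/data2/idx_foo
# Label the non-default location so mysqld is allowed to write there, then apply it
semanage fcontext -a -t mysqld_db_t "/opt/data/data2(/.*)?"
restorecon -Rv /opt/data/data2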