I am going through https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file
But there is no option to rotate the log file.
This is causing huge log files to be created which have to be deleted manually.
Once deleted manually, telegraf does not recreate that file, and the only option is to restart telegraf.
I do not want to rotate the log file with a cron job, because telegraf may be in the middle of writing to it. As per our use case, we need to keep the last 10 minutes of telegraf output, with metrics being sent by telegraf every minute.
It seems someone started in this direction but never completed it:
https://github.com/influxdata/telegraf/issues/1550
Please update telegraf to a newer version (1.12.x); it supports rotation for both the file output plugin and the agent log:
[[outputs.file]]
  files = ["stdout", "/tmp/metrics.out"]
  rotation_interval = "24h"
  rotation_max_archives = 10
  data_format = "influx"

[agent]
  ...
  debug = false
  quiet = false
  logfile = "/var/log/telegraf/telegraf.log"
  logfile_rotation_interval = "24h"
  logfile_rotation_max_archives = -1
  ...
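Given the requirement of keeping roughly the last 10 minutes of output, a shorter interval may fit better than 24h. A minimal sketch using the same 1.12.x options (a size-based rotation_max_size option also exists, if that suits the use case better):

[[outputs.file]]
  files = ["/tmp/metrics.out"]
  # rotate every 10 minutes and keep one archive, so roughly the last
  # 10-20 minutes of metrics are on disk at any time
  rotation_interval = "10m"
  rotation_max_archives = 1
  data_format = "influx"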
I'm using the gmailr package to send emails from an R script.
Locally it all works fine, but when I try to run it during a Docker build step on Google Cloud, I get an error.
I implemented it in the way described here.
So basically, locally the part of my code for sending emails looks like this:
library(gmailr)

# point gmailr at the OAuth client and reuse the cached token in "secret"
gm_auth_configure(path = "credentials.json")
gm_auth(email = TRUE, cache = "secret")
gm_send_message(buy_email)
Please note that I renamed the .secret folder to secret, because I want to deploy my script with Docker on gcloud and didn't want any unexpected errors due to the dot in the folder name.
This is the code which I'm now trying to run in the cloud:
library(gmailr)

setwd("/home/rstudio/")
gm_auth_configure(path = "credentials.json")
# tell gargle where the cached token lives and which account to use
options(
  gargle_oauth_cache = "secret",
  gargle_oauth_email = "email.which.was.used.to.get.secret@gmail.com"
)
gm_auth(email = "email.which.was.used.to.get.secret@gmail.com")
When running this code in the Docker build process, I receive the following error:
Error in gmailr_POST(c("messages", "send"), user_id, class = "gmail_message", :
Gmail API error: 403
Request had insufficient authentication scopes.
Calls: gm_send_message -> gmailr_POST -> gmailr_query
I can reproduce the error locally when I do not check the corresponding consent box during the interactive OAuth flow.
Therefore, my first assumption is that the secret folder is not being pushed correctly during the Docker build, so the authentication tries to run again; in a non-interactive session the box can't be checked, and the error is thrown.
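One way to test this assumption non-interactively is gargle's own cache inventory helper (the cache name "secret" matches the code above):

library(gargle)

# Lists the tokens found in the given OAuth cache; if this comes back
# empty inside the container, the "secret" folder never made it into
# the image and gm_auth() falls back to a fresh, interactive flow.
gargle_oauth_sitrep(cache = "secret")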
This is the part of the Dockerfile.txt where I push the files and run the script:
#2 ADD FILES TO LOCAL
COPY . /home/rstudio/
WORKDIR /home/rstudio
#3 RUN R SCRIPT
CMD Rscript /home/rstudio/run_script.R
And this is the folder that contains all the files and folders being pushed to the cloud.
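One thing worth ruling out here (an assumption on my side, not something visible above): COPY . only includes the secret folder if nothing in the build context excludes it, so a .dockerignore entry like the following would silently break the token cache:

# .dockerignore (hypothetical) — if a line like this is present,
# COPY . skips the token cache and gm_auth() has to start from scratch
secret/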
My second assumption is that I somehow have to specify the scope to use the Google platform for my Docker image, but unfortunately I'm not sure where to do that.
I'd really appreciate any help! Thanks in advance!
For anyone experiencing the same problem, I was finally able to find a solution.
The problem is that on GCE, the gargle package picks up the instance's service account ("GCE auth") instead of using the normal user OAuth flow.
To temporarily disable GCE auth, I'm now using the following piece of code:
library(gargle)
library(gmailr)

# drop every credential function gargle knows about (including the GCE
# service-account fetcher, which otherwise wins on a GCE machine) ...
cred_funs_clear()
# ... and re-register only the regular user OAuth flow
cred_funs_add(credentials_user_oauth2 = credentials_user_oauth2)

gm_auth_configure(path = "credentials.json")
options(
  gargle_oauth_cache = "secret",
  gargle_oauth_email = "sp500tr.cloud@gmail.com"
)
gm_auth(email = "email.which.was.used.for.credentials.com")

# restore gargle's default credential registry afterwards
cred_funs_set_default()
For further reference, see also here.
So I enabled the error log and slow query log on MariaDB, and I could see data in the log files. But when I checked back a couple of hours later, they were empty: both log files show a size of 0, and no new logs are being created.
The machine has not been restarted during this time. Why are both the error and slow query logs suddenly empty?
The OS is Debian.
The error log rarely has anything.
The slowlog needs several settings to really get it "ON":
log_output = FILE
slow_query_log = ON
slow_query_log_file = (fullpath to some file)
long_query_time = 1
log_slow_admin_statements = ON    # optional
log_queries_not_using_indexes = OFF    # optional
Explanation and more discussion, plus recommendation on digesting the results: http://mysql.rjweb.org/doc.php/mysql_analysis#slow_queries_and_slowlog
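Note that settings applied only at runtime (SET GLOBAL ...) are lost when the server restarts, so it's worth confirming both the runtime values and the config file (on Debian, the [mysqld] section under /etc/mysql/). A quick runtime check with standard MariaDB statements:

-- show the current slowlog-related settings
SHOW GLOBAL VARIABLES LIKE 'slow_query%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';

-- re-enable on the running server; mirror these in the [mysqld]
-- section of the config so they survive restarts
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;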
I'm working on a Python script that first creates MultiLoad (mload) or FastExport (fexp) job files, depending on parameters, and then runs those files from within the script to insert/export data.
import subprocess

# run the generated job file through the matching utility via the shell
result = subprocess.run(
    "{0} < {1}".format(file_type, file_path),
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    shell=True,
)
example: fexp < .../conf.fx
This, however, creates logs in the /logs folder depending on the application: either MloadUpload.log or FExpDownload.log.
Is there a parameter with which I could disable creation of log files by multiload and fastexport?
Can I do it from command line or within the config file?
I can't find an option for this anywhere :/
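One possible workaround, if it turns out the utilities resolve their log path relative to the working directory (an unverified assumption on my side, not a documented MultiLoad/FastExport feature), would be to run them from a throwaway directory:

import subprocess
import tempfile

# Assumption: the .log files are written relative to the working directory;
# running inside a temporary directory discards them when it is cleaned up.
with tempfile.TemporaryDirectory() as tmp:
    result = subprocess.run(
        "fexp < /path/to/conf.fx",  # hypothetical job file
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        shell=True,
        cwd=tmp,
    )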
Currently, all my syslogs are logged into /var/log/syslog, and I want local1.* to go to a separate log file.
My rsyslog.d/50-default.conf file looks as follows:
# Default rules for rsyslog.
#
# For more information see rsyslog.conf(5) and /etc/rsyslog.conf
#
# First some standard log files. Log by facility.
#
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/var/log/syslog
#cron.* /var/log/cron.log
#daemon.* -/var/log/daemon.log
kern.* -/var/log/kern.log
#lpr.* -/var/log/lpr.log
mail.* -/var/log/mail.log
#user.* -/var/log/user.log
local1.* -/mnt/log/*/*/python/syslog-analytics.log
But the syslog-analytics.log file is not getting created in the desired location.
Later, I also created that file manually with syslog:adm as the user:group, but all my logs are still logged into /var/log/syslog.
I had the same problem: rsyslogd was creating files under /var/log/ and /tmp/, but refused to create files in /data/log/ or to log into those files when I created them manually. File system permissions were never an issue; everything was world-writeable. This almost drove me mad until I found out that I was on an SELinux-enforcing system (CentOS), where some processes are restricted to certain file system locations.
If you don't care about SELinux, or want to check whether it is the cause of your problems, simply turn it off by setting SELINUX=disabled in /etc/selinux/config (needs a reboot!).
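A gentler way to test this without a reboot is to switch SELinux to permissive mode at runtime; if logging suddenly works, SELinux was the culprit. Standard commands on CentOS/RHEL, using the /data/log path from above:

# check the current mode (Enforcing / Permissive / Disabled)
getenforce

# switch to permissive until the next boot
sudo setenforce 0

# the proper long-term fix: label the custom directory as log data
# (semanage comes with the policycoreutils-python package)
sudo semanage fcontext -a -t var_log_t "/data/log(/.*)?"
sudo restorecon -Rv /data/log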
I'm writing a Java client that uploads a file to a Virtuoso WebDAV repository through HTTP PUT.
There is a bug in my code I'm trying to fix; I want to check the WebDAV server's log file, as the bug relates to an HTTP connection reset.
I found the description of the WebDAV server configuration in the Virtuoso configuration file virtuoso.ini.
[HTTPServer]
ServerPort = 8890
ServerRoot = ../vsp
DavRoot = DAV
EnabledDavVSP = 0
HTTPProxyEnabled = 0
TempASPXDir = 0
DefaultMailServer = localhost:25
MaxClientConnections = 5
MaxKeepAlives = 10
KeepAliveTimeout = 10
MaxCachedProxyConnections = 10
ProxyConnectionCacheTimeout = 15
HTTPThreadSize = 280000
HttpPrintWarningsInOutput = 0
Charset = UTF-8
;HTTPLogFile = logs/http.log
The last line suggests the HTTP log file should be http.log in the logs folder.
However, I searched the whole Virtuoso installation directory and found no subdirectory called logs.
I also tried the Virtuoso online documentation, but it doesn't help.
I'm a new user of Virtuoso and really don't have much knowledge of it yet. I hope someone can help with this.
Doesn't the fact that the last line starts with a semicolon mean that it is commented out? I suggest removing that and restarting Virtuoso.
You just need to change it to
HTTPLogFile = http.log
and it will create a file http.log in the same directory from which the Virtuoso instance was started, i.e. where virtuoso.ini is located.
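Putting both answers together, the relevant edit in virtuoso.ini is just removing the leading semicolon (shown here with the relative filename suggested above) and restarting Virtuoso:

[HTTPServer]
; ... other settings unchanged ...
HTTPLogFile = http.log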