How to write nginx logs to TDengine? - nginx

We want to write nginx logs to TDengine. We tried nginx -> vector -> kafka -> TDengine without success: we can only complete the basic database creation, but we cannot write data to the data tables. The error is as follows.
[2022-05-28 19:25:09,454] ERROR [nginx-tdengine|task-0] WorkerSinkTask{id=nginx-tdengine-0} RetriableException from SinkTask: (org.apache.kafka.connect.runtime.WorkerSinkTask:600)
RetriableException: java.sql.SQLException: Exception chain:
SQLException: TDengine ERROR (80000221): Invalid JSON format
The TDengine version is 2.4.0.14
We followed the documentation at https://docs.taosdata.com/third-party/kafka.
I wonder what the problem is, and whether there is a more reasonable solution?
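For reference, the TDengine Kafka sink writes data through the schemaless interface, and in its JSON mode it expects OpenTSDB-style JSON messages; an "Invalid JSON format" error usually means the Kafka message body does not match that shape. A hedged sketch of what a single message could look like (the metric name, tags, and values are illustrative, not taken from the question):

[
  {
    "metric": "nginx.requests",
    "timestamp": 1653736000000,
    "value": 1,
    "tags": {"host": "web01", "status": "200"}
  }
]

The vector step between nginx and Kafka would need to emit messages in this shape (or in InfluxDB line protocol / OpenTSDB telnet format, depending on how the connector's data format is configured).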

Related

Telegraf outputs.sql issue

"[[outputs.sql]]" I collected it with mysql.
"[[inputs.system]]" I tried it.
Error writing to outputs.sql: execution failed: Error 1054: Unknown column 'uptime' in 'field list'.
There's a log like this.
Except for fielddrop.
How do we solve this?
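If dropping the field is not an option, the error usually means the MySQL table for the system measurement was created before the uptime field existed: Telegraf's outputs.sql plugin creates a table the first time it sees a measurement, but it does not add columns when new fields appear later. Under that assumption, adding the missing column by hand should clear the error; the table name and column type below are guesses that should be checked against the actual schema:

ALTER TABLE system ADD COLUMN uptime BIGINT;

Alternatively, dropping the table and letting outputs.sql recreate it with the current field set also works, at the cost of the rows already stored.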

TaskscheduleR job returns fetch.write_memory error but works as a local job in RStudio on a virtual machine

I'm trying to set up an automated job that builds a CSV file (by pulling aggregates from multiple MySQL databases) and sends out an email on a daily schedule. The entire script works as a local job (via RStudio Jobs, v1.2.1335) when it is run through RStudio normally. However, when the job is automated through the taskscheduleR addin, it returns the following error:
Error in curl::curl_fetch_memory(url, handle = handle) :
Could not resolve host: .domo.com
Calls: <Anonymous> ... request_fetch -> request_fetch.write_memory -> <Anonymous>
Execution halted
My guess is that my DomoR package is masking the fetch function multiple times:
Welcome to DomoR
Attaching package: 'DomoR'
The following object is masked from 'package:RMySQL':
fetch
The following object is masked from 'package:DBI':
fetch
But I'm not entirely sure that this is the issue. I am running this on an AWS EC2 instance running Windows Server 2019.
I found out that my username and password were not being used in the automated job. It seems that the .Renviron file is not picked up properly in an EC2 automated job.
You can solve this by reading the .Renviron file yourself, loading it into a data frame, and then pulling the credentials from it.
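A minimal sketch of that workaround, assuming .Renviron sits in the home directory and contains simple KEY=value lines (the variable names DOMO_USER and DOMO_PASS are placeholders, not taken from the original post):

# read .Renviron manually and load it as a data frame of key/value pairs
env <- read.table(text = readLines("~/.Renviron"), sep = "=",
                  col.names = c("key", "value"), stringsAsFactors = FALSE)
# push every pair into the session environment
do.call(Sys.setenv, as.list(setNames(env$value, env$key)))
# the credentials can then be pulled as usual
user <- Sys.getenv("DOMO_USER")
pass <- Sys.getenv("DOMO_PASS")

Base R's readRenviron("~/.Renviron") does the same job in one call, which may be simpler if the data frame itself is not needed.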

Get data from OpenDap server that requires authentication using R

I'm trying to get data from an OPeNDAP server using R and the ncdf4 package. However, the NASA EOSDIS server requires a username and password. How can I pass this info using R?
Here is what I'm trying to do:
require(ncdf4)
f1 <- nc_open('https://disc2.gesdisc.eosdis.nasa.gov/opendap/TRMM_L3/TRMM_3B42.7/2018/020/3B42.20180120.15.7.HDF')
And the error message:
Error in Rsx_nc4_get_vara_double: NetCDF: Authorization failure syntax
error, unexpected WORD_WORD, expecting SCAN_ATTR or SCAN_DATASET or
SCAN_ERROR context: HTTP^ Basic: Access denied. Var: nlat Ndims: 1
Start: 0 Count: 400 Error in ncvar_get_inner(d$dimvarid$group_id,
d$dimvarid$id, default_missval_ncdf4(), : C function
R_nc4_get_vara_double returned error
I tried the URL form https://username:password@disc2.... but that did not work either.
Daniel,
The service you are accessing is using third-party redirection to authenticate users. Therefore the simple way of providing credentials in the URL doesn't work.
You need to create two files.
A .dodsrc file (an RC file for the netcdf-c library) with the following content:
HTTP.COOKIEFILE=.cookies
HTTP.NETRC=.netrc
A .netrc file, in the location referenced in the .dodsrc, with your credentials:
machine urs.earthdata.nasa.gov
login YOURUSERNAMEHERE
password YOURPASSWORDHERE
You can find more details at
https://www.unidata.ucar.edu/software/netcdf/docs/md__Users_wfisher_Desktop_v4_86_81-prep_netcdf-c_docs_auth.html
Regards
Antonio
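If it is easier to set this up from within R, the two files can also be written right before calling nc_open. A small sketch following the file contents above, assuming the files go into the current working directory (one of the places the netcdf-c library looks for .dodsrc):

# point the netcdf-c library at a cookie jar and a .netrc file
writeLines(c("HTTP.COOKIEFILE=.cookies", "HTTP.NETRC=.netrc"), ".dodsrc")
# store the Earthdata login credentials referenced by .dodsrc
writeLines(c("machine urs.earthdata.nasa.gov",
             "login YOURUSERNAMEHERE",
             "password YOURPASSWORDHERE"), ".netrc")

library(ncdf4)
f1 <- nc_open("https://disc2.gesdisc.eosdis.nasa.gov/opendap/TRMM_L3/TRMM_3B42.7/2018/020/3B42.20180120.15.7.HDF")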
Unfortunately, even after defining the credentials and their location,
ncdf4::nc_open("https://gpm1.gesdisc.eosdis.nasa.gov/opendap/GPM_L3/GPM_3IMERGDE.06/2020/08/3B-DAY-E.MS.MRG.3IMERG.20200814-S000000-E235959.V06.nc4")
still returns
Error in Rsx_nc4_get_vara_double: NetCDF: Authorization failure
The same happens when using ncdump from a terminal:
$ ncdump https://gpm1.gesdisc.eosdis.nasa.gov/opendap/GPM_L3/GPM_3IMERGDE.06/2020/08/3B-DAY-E.MS.MRG.3IMERG.20200814-S000000-E235959.V06.nc4
returns
syntax error, unexpected WORD_WORD, expecting SCAN_ATTR or SCAN_DATASET or
SCAN_ERROR context: HTTP^ Basic: Access denied. NetCDF: Authorization
failure Location: file
/build/netcdf-KQb2aQ/netcdf-4.6.0/ncdump/vardata.c; line 473

ELK Kibana cannot create index pattern

I have an ELK stack running, which has been working fine. But today I got an exception when trying to create a new index pattern.
To solve this issue, I deleted the .kibana index and the .monitoring-kibana-6-xxx indices.
I also tried to create the index pattern from the command line (Create index-patterns from console with Kibana 6.0), but I could not set the default index pattern, so I still need to create or set the index pattern from the UI.
Error: 413 Response
at http://staging.alct56.club/bundles/kibana.bundle.js?v=16070:231:21272
at processQueue (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:39:9912)
at http://staging.alct56.club/bundles/commons.bundle.js?v=16070:39:10805
at Scope.$digest (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:39:21741)
at Scope.$apply (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:39:24520)
at done (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:38:9495)
at completeRequest (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:38:13952)
at XMLHttpRequest.xhr.onload (http://staging.alct56.club/bundles/commons.bundle.js?v=16070:38:14690)
Finally, I found that this is blocked by our company's HTTP policy. But I still cannot understand why a "response body is too big" rule stops the index pattern from being created.
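For context, HTTP 413 normally means the request entity was too large: it is the index-pattern request that Kibana sends, not the response, that is being rejected, most likely by the intercepting proxy that enforces the company policy. If that proxy happens to be nginx-based (an assumption, nothing above says what it is), the relevant setting would be client_max_body_size, for example:

http {
    # allow larger request bodies so the index-pattern request is not rejected with 413
    client_max_body_size 10m;
}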

escript: /riak-1.1.2/rel/riak/erts-5.8.5/bin/nodetool

I keep getting this error when trying to run riak commands.
The nodetool file does not exist in that directory. When I copy the nodetool file over from 5.8.4, I start getting this error:
{"init terminating in do_boot",{'cannot get bootfile','start_clean.boot'}}
EDIT
I followed this great advice here: http://onerlang.blogspot.co.uk/2009/10/fighting-with-riak.html. Now when I run riak start I get:
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
Error reading /abc/def/otp_src_R14B03/riak-1.1.2/dev/dev1/etc/app.config
{"init terminating in do_boot",{'cannot get bootfile','start_clean.boot'}}
EDIT 2
I seem to be hitting this problem: http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-November/006436.html.
Whenever I build from source (required for multiple nodes on the same machine), riak tries to use erts-5.8.5, whereas riak requires(?) erts-5.8.4.
Is it possible to tell it not to use 5.8.5 and to use 5.8.4 instead?
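One hedged way to attack the erts mismatch is to make sure the Erlang/OTP you actually want (R14B03, erts-5.8.4) is first on the PATH and then rebuild the releases, so the generated scripts embed the matching erts directory. A rough sketch; the OTP path is only a guess taken from the error message above and should be adjusted to the real install location:

# put the intended Erlang/OTP build first on the PATH
export PATH=/abc/def/otp_src_R14B03/bin:$PATH
erl -noshell -eval 'io:format("~s~n", [erlang:system_info(version)]), halt().'   # should print 5.8.4
cd riak-1.1.2
make clean
make devrel    # rebuild the dev releases against the Erlang now on PATH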
