I'm learning the concept of Telegraf. It's an agent that collects data and writes it to InfluxDB. So I've set up an InfluxDB, a Telegraf and an nginx which will create the logs. Telegraf will collect the logs and write them to InfluxDB.
But I see a lot of configurations which also use Aerospike. I don't see what Aerospike is doing in this concept.
I see it configured as input plugin for telegraf:
[[inputs.aerospike]]
servers = ["aerospike:3000"]
Am I wrong about the nginx part and is Aerospike providing logs, or how do I have to interpret this concept of using nginx, InfluxDB, Telegraf and Aerospike?
I would assume Aerospike is simply another potential data source to be ingested by Telegraf (the plugin polls the Aerospike nodes for statistics rather than reading log files)...
https://github.com/influxdata/telegraf/tree/master/plugins/inputs/aerospike
nginx is just another potential input plugin... here is the full list:
https://github.com/influxdata/telegraf/tree/master/plugins/inputs
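For illustration, here is a minimal sketch of a telegraf.conf that wires both kinds of inputs into an InfluxDB output. The URLs, the database name and the status endpoint are assumptions for this example, and note that the nginx input reads the stub_status page (metrics), not the access logs:

[[outputs.influxdb]]
urls = ["http://influxdb:8086"]   # assumed address of the InfluxDB instance
database = "telegraf"             # assumed database name

[[inputs.nginx]]
# requires the stub_status module to be enabled in nginx
urls = ["http://nginx/nginx_status"]   # assumed status endpoint

[[inputs.aerospike]]
servers = ["aerospike:3000"]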
Related
Could anyone who knows the answer please help with my question:
I have two Telegraf agents, one on a Windows system and the other on a Linux CentOS server. I also have one InfluxDB database.
The question is: in my Windows Telegraf config file I have set the db name to telegraf_one, and in the CentOS Telegraf config file I have set the same db name, i.e. telegraf_one.
The db telegraf_one was created on my InfluxDB server when the Windows Telegraf started running, and later I started the Telegraf agent on CentOS. Will db conflicts occur because the same db name is used by different Telegraf agents, or will both Telegraf agents use the same db without conflicts?
No conflict will occur. Both agents will report metrics to the same db, each under its own host tag (the OS hostname). This is actually how Telegraf is designed to function.
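For example, both agents can use an identical output section; the measurements remain distinguishable because Telegraf tags every point with the sending host. The InfluxDB URL below is an assumption, and leaving hostname empty means the OS hostname is used as the host tag:

[agent]
hostname = ""   # empty: use the OS hostname as the "host" tag

[[outputs.influxdb]]
urls = ["http://influxdb-server:8086"]   # assumed address of the shared InfluxDB
database = "telegraf_one"                # same database name on both agents is fine

You can then separate the two hosts in queries, e.g. SELECT mean("usage_idle") FROM "cpu" WHERE "host" = 'centos-host' (the hostname here is hypothetical).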
I'm new to the Graphite monitoring tool and have one question about this setup. I have two servers: one is treated as the hosted server (Graphite, collectd, statsd and Grafana installed) and Grafana displays all the metrics there. On the second server I have installed Graphite and collectd. Now I need to send the second server's collectd information to the first (hosted) server, and those metrics need to be displayed on the web using Grafana...
Could you please suggest whether there is any plugin or any other way to set up this configuration?
Thanks.
You don't actually need Graphite on the second host; you can just configure collectd on that host to write to Graphite (actually the Carbon ingest API) on the first host.
https://collectd.org/documentation/manpages/collectd.conf.5.shtml#plugin_write_graphite
If you do want to have graphite on both servers for some reason, you can use multiple Node entries in your collectd config to have it send metrics to both graphite instances.
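As a rough sketch, the write_graphite section of collectd.conf on the second server could look like this; the hostname is an assumption, and 2003 is the usual Carbon plaintext port:

LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "central">
    Host "graphite-host.example.com"   # the first (hosted) server
    Port "2003"                        # default Carbon plaintext port
    Protocol "tcp"
    Prefix "collectd."
  </Node>
</Plugin>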
As Kibana is the web UI for Elasticsearch, it is better to make it highly available. After reading the docs and building a demo, I cannot find a way to set up two Kibana instances simultaneously for a single Elasticsearch cluster.
After digging deeper into Kibana, I finally found that Kibana stores its data and its dashboard and search configuration in the backend ES. In this way Kibana is just like a proxy and ES serves as the database for it.
So, the answer is yes. Kibana supports High Availability through ES.
You could simply change the server.port value to another free port (e.g. 6602) in your kibana.yml, since 5601 is the default. That way you're pointing at the same ES instance and have two Kibana instances as well (one running on the default port and the other on port 6602).
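A minimal sketch of the second instance's kibana.yml under that assumption (the ES URL is an example, and depending on the Kibana version the setting may be elasticsearch.hosts instead of elasticsearch.url):

# kibana.yml for the second Kibana instance
server.port: 6602
elasticsearch.url: "http://localhost:9200"   # same ES cluster as the first instance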
I'm trying to set up a central logstash configuration. However, I would like to send my logs through syslog-ng and not third-party shippers. This means that my logstash server accepts, via syslog-ng, all the logs from the agents.
I then need to install a logstash process that will be reading from /var/log/syslog-clients/* and grabbing all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM.
In theory I need to also configure a second logstash process that will read from redis and start indexing the logs and send them to elasticsearch.
My question:
Do I have to use two different logstash processes (shipper & server) even if they are on the same box (I want a single log server instance)? Is there any way to have just one logstash configuration and have the process read from syslog-ng ---> write to redis and also read from redis ---> output to Elasticsearch?
Diagram of my setup:
[client] --syslog-ng--> [log server] --(files written by syslog-ng)--> logstash-shipper --> redis --> logstash-server --> elasticsearch <-- kibana
Do I have to use two different logstash processes (shipper & server) even if they are on the same box (I want a single log server instance)?
Yes.
Is there any way to have just one logstash configuration and have the process read from syslog-ng ---> write to redis and also read from redis ---> output to Elasticsearch?
Not that I have seen yet.
Why would you want this? I have a single machine and remote machine config and they work extremely reliably, with a small footprint. Maybe you could explain your reasoning a bit - I know I would be interested to hear about it.
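For reference, here is a rough sketch of what the two configs on that single box could look like; the paths, the redis key and the hosts are assumptions, and depending on your Logstash version the elasticsearch option may be host rather than hosts:

# shipper.conf - reads the files syslog-ng writes, pushes events to redis
input {
  file {
    path => "/var/log/syslog-clients/*"
    type => "syslog"
  }
}
output {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}

# indexer.conf - pops events from redis and indexes them into elasticsearch
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
  }
}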
On my nginx server, I am going to use more than one of the GeoIP databases (one for country+city and another one for ISP or organization). I could not find a module for nginx and/or PECL that would get more than one of these databases to run.
The database provider is not going to publish a single DB with all the data in one file, so it looks like I am lost.
http://wiki.processmaker.com/index.php/Nginx_and_PHP-FPM_Installation seems to work with one DB only.
It's possible with the standard built-in GeoIP nginx module:
http://nginx.org/en/docs/http/ngx_http_geoip_module.html
geoip_country CountryCity.dat;
geoip_city CountryCity.dat;
geoip_org Organization.dat;
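Those directives make variables such as $geoip_country_code, $geoip_city and $geoip_org available in the rest of the config. As a sketch of how they could be passed on to PHP-FPM (the location block and socket path are assumptions for this example):

location ~ \.php$ {
  include fastcgi_params;
  fastcgi_pass unix:/var/run/php-fpm.sock;          # assumed PHP-FPM socket
  fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
  fastcgi_param GEOIP_CITY         $geoip_city;
  fastcgi_param GEOIP_ORGANIZATION $geoip_org;
}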