I'm trying to send my Symfony2 logs to a Logstash server, but the server doesn't receive the logs =/
My Logstash config in /etc/logstash/conf.d/logstash.conf is:
input {
  gelf {
    port => "12201"
    host => "0.0.0.0"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout {
    debug => true
    debug_format => "json"
  }
}
The files in /var/log/logstash are empty. On the Monolog side, I captured the strings being sent:
{"_facility":"request",
"_ctxt_route_parameters":"...",
"_ctxt_request_uri":"...",
"version":"1.0",
"short_message":"...",
"full_message":null,
"host":"dev",
"timestamp":1462196712,
"level":6}
What is wrong with my config? I also checked the Elasticsearch indices:
curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana 1 1 1 0 3.1kb 3.1kb
Thanks for helping
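One way to narrow this down is to run a stripped-down pipeline and watch whether the gelf input decodes anything at all. A minimal sketch, assuming a Logstash version where the stdout options debug/debug_format have been replaced by codecs such as rubydebug:
input {
  gelf {
    host => "0.0.0.0"
    port => "12201"
  }
}
output {
  # Print every decoded GELF event straight to the console
  stdout { codec => rubydebug }
}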
I've been following this tutorial on how to use the ELK stack for nginx logs.
I've created nginx.conf to configure how to get the logs, but when I run: bin/logstash -f /etc/logstash/conf.d/nginx.conf
I get this error:
[ERROR] 2020-11-13 14:59:15.254 [Converge PipelineAction::Create] agent - Failed to execute action
{:action=>LogStash::PipelineAction::Create/pipeline_id:main,
 :exception=>"LogStash::ConfigurationError",
 :message=>"Expected one of [A-Za-z0-9_-], [ \t\r\n], \"#\", \"=>\" at line 9, column 8 (byte 135)
  after input{\n\t\n file{\n path => [\"/var/log/nginx/access.log\" , \"/var/log/nginx/error.log\"]\n type => \"nginx\"\n }\n filter{\n \n grok",
 :backtrace=>[
  "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'",
  "org/logstash/execution/AbstractPipelineExt.java:184:in `initialize'",
  "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'",
  "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'",
  "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'",
  "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:365:in `block in converge_state'"]}
and here's my nginx.conf file:
input{
  file{
    path => ["/var/log/nginx/access.log" , "/var/log/nginx/error.log"]
    type => "nginx"
  }
  filter{
    grok{
      match => ["message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
      overwrite => ["message"]
    }
    mutate{
      convert => ["response","integer"]
      convert => ["bytes","integer"]
      convert => ["responsetime","float"]
    }
    geoip{
      source => "clientip"
      target => "geoip"
      add_tag => ["nginx-geoip"]
    }
    date {
      match => ["timestamp" , "dd/MMM/YYYY:HH:mm:ss Z"]
      remove_field => ["timestamp"]
    }
    useragent {
      source => "agent"
    }
  }
  output{
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "nginx-%{+yyyy.MM.dd}"
      document_type => "nginx_logs"
    }
  }
}
I found a similar question but the answer didn't help.
Is anyone familiar with Logstash syntax who can help me figure out my error?
Thank you
You are missing a } to close the input section. Insert it before the filter keyword.
Also, remove the last } in the file.
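With those two fixes applied, the file's structure would look like this (same settings as above, only the braces moved):
input{
  file{
    path => ["/var/log/nginx/access.log" , "/var/log/nginx/error.log"]
    type => "nginx"
  }
}
filter{
  grok{
    match => ["message" , "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}"]
    overwrite => ["message"]
  }
  mutate{
    convert => ["response","integer"]
    convert => ["bytes","integer"]
    convert => ["responsetime","float"]
  }
  geoip{
    source => "clientip"
    target => "geoip"
    add_tag => ["nginx-geoip"]
  }
  date {
    match => ["timestamp" , "dd/MMM/YYYY:HH:mm:ss Z"]
    remove_field => ["timestamp"]
  }
  useragent {
    source => "agent"
  }
}
output{
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "nginx-%{+yyyy.MM.dd}"
    document_type => "nginx_logs"
  }
}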
I am using telegram.php to connect my bot. When I use sendMessage, everything looks fine in my logs, but I do not receive anything from the bot.
When I check my log there is a problem like this:
ok: False
curl_error_code: 51
curl_error: SSL: no alternative certificate subject name matches target host name 'api.telegram.org'
I don't know what to do to fix it.
I don't know this Telegram bot library, but I see that it uses GuzzleHttp.
During initialization it doesn't accept any configuration; see Request::initialize():
public static function initialize(Telegram $telegram)
{
    if (!($telegram instanceof Telegram)) {
        throw new TelegramException('Invalid Telegram pointer!');
    }
    self::$telegram = $telegram;
    self::setClient(new Client(['base_uri' => self::$api_base_uri]));
}
You should check its documentation; there are a lot of setters that let you overwrite the default settings.
What you need is to set \GuzzleHttp\RequestOptions::VERIFY to false in the client config:
$this->client = new \GuzzleHttp\Client([
    'base_uri' => 'someAccessPoint',
    \GuzzleHttp\RequestOptions::HEADERS => [
        'User-Agent' => 'some-special-agent',
    ],
    'defaults' => [
        \GuzzleHttp\RequestOptions::CONNECT_TIMEOUT => 5,
        \GuzzleHttp\RequestOptions::ALLOW_REDIRECTS => true,
    ],
    // Disables TLS certificate verification; this is insecure, so treat it
    // as a temporary workaround or for local testing only.
    \GuzzleHttp\RequestOptions::VERIFY => false,
]);
To fix this problem, copy this URL to a browser to set the webhook:
https://api.telegram.org/botTOKEN/setWebhook?url=https://yourwebsite.com
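Afterwards, the registration can be checked with the Bot API's getWebhookInfo method in the same way:
https://api.telegram.org/botTOKEN/getWebhookInfo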
Solution 2 for the error
Let's follow these simple steps:
Download this bundle of root certificates: https://curl.haxx.se/ca/cacert.pem
Put it in any location on your server.
Open php.ini and add this line:
curl.cainfo = "[the_location]\cacert.pem"
Restart your webserver.
That’s it. 🙂
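Alternatively, if editing php.ini is not an option, Guzzle's verify option also accepts a path to a CA bundle directly, so you can keep certificate verification enabled. A minimal sketch (the path is hypothetical; point it at wherever you saved cacert.pem):
$client = new \GuzzleHttp\Client([
    'base_uri' => 'https://api.telegram.org',
    // Hypothetical path: point Guzzle at the downloaded CA bundle instead
    // of disabling certificate verification entirely.
    \GuzzleHttp\RequestOptions::VERIFY => '/path/to/cacert.pem',
]);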
I'm new to Kibana. I am working on data migration from MySQL to Elasticsearch. How can I do this? Is using the JDBC input plugin the only way?
Here is the logstash.conf file where I specified the input and output:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/kibana"
    jdbc_user => "xxx"
    jdbc_password => "xxxxx"
    jdbc_driver_library => "/root/mysql-connector-java-5.1.30/mysql-connector-java-5.1.30-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM datalog"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
  stdout { codec => rubydebug }
}
After running the above file with ./logstash -f logstash.conf, we get the output below:
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
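Both lines are warnings rather than fatal errors, and the first one already hints at the fix: point Logstash at its settings directory. Assuming a package install with the settings under /etc/logstash, that would be:
./logstash -f logstash.conf --path.settings /etc/logstash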
I want to ship stdout from a running application to logz.io using Logstash. The application and Logstash are both Docker images managed by docker-compose, which does the setup (pulling images, network_mode, logging driver, etc.). The Logstash input is handled via the gelf input plugin; shipping to logz.io is handled via the tcp output plugin.
logstash.conf:
input {
  gelf {
    type => docker
    port => 12201
  }
}
filter {
  mutate {
    add_field => { "token" => "${LOGZIOTOKEN}" }
  }
}
output {
  tcp {
    host => "listener.logz.io"
    port => 5050
    codec => json_lines
  }
}
excerpt from docker-compose.yml:
application:
  ...
  logging:
    driver: "gelf"
    options:
      gelf-address: "udp://0.0.0.0:12201"
This works as expected.
Now there is a TCP proxy server I need to use to ship the logs from the host (running the Logstash instance) to logz.io. Unfortunately, I did not find a proxy option for Logstash's tcp output plugin. Does anyone have a suggestion for this issue?
Logstash's http output plugin has a proxy attribute. For HTTP shipping, use the logz.io listener ports 8070 (http) or 8071 (https).
A working config looks like this:
output {
  http {
    url => "https://listener.logz.io:8071?token=${LOGZIOTOKEN}"
    http_method => "post"
    format => "json"
    content_type => "application/json"
    proxy => {
      host => "${PROXYHOST}"
      port => "${PROXYPORT}"
      scheme => 'http'
      user => "${PROXYUSER}"
      password => "${PROXYPW}"
    }
  }
}
You do not need the filter from the tcp output config: with the http output, the token already travels in the URL. Just add the input and ship it! A combined sketch follows.
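Putting the pieces together, the whole pipeline would look something like this (a sketch that reuses the gelf input from the original setup):
input {
  gelf {
    type => docker
    port => 12201
  }
}
output {
  http {
    url => "https://listener.logz.io:8071?token=${LOGZIOTOKEN}"
    http_method => "post"
    format => "json"
    content_type => "application/json"
    proxy => {
      host => "${PROXYHOST}"
      port => "${PROXYPORT}"
      scheme => 'http'
      user => "${PROXYUSER}"
      password => "${PROXYPW}"
    }
  }
}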
I'm trying to monitor a Symfony app with the ELK stack.
I'm shipping my logs to Logstash with the following configuration:
monolog:
    handlers:
        main:
            type: gelf
            publisher:
                hostname: elk-host
                port: 10514
            formatter: monolog.formatter.gelf_message
            level: INFO
In Kibana, I see that I receive the logs, but the message is encoded in a strange way; here is an example of what Kibana displays:
x\x9CMP\xC1n\x830\f\xFD\u0015+\xA7V\xAAB\xA1\f(\xD7j;Nڴ\xDD\"Ui0`)\u0004D\(\x9A\xF6\xEF\v\x9B\xD6\xEDf\xBFg\xFB\xF9\xBD\u000F1\xE1\xE8\xA9w\xA2\u0014\xB1܋\x9Dh{ϡ\u0019\xFA\x915Y\xCF^\xDA\xDEh\e\u0018\xDF\u0006\xECܡ\xF7\xBA\xC10\xF2\x8A5\x8E\xE8\f\xB9\u0006\xB8EP\xC2\xF4#*\u0001xct\xEBQ\xB8,#\xEC\xC1\xE9\u000EaSaM\u000E\xAB\u0015l\x90\x9F\u0003\xB6\xD9n\x81
Here is my Logstash configuration file:
input {
  gelf {
    codec => "json"
  }
  syslog {
    port => 10514
    type => "syslog"
  }
}
filter {
}
output {
  elasticsearch {}
}
I tried to add an encoding option (charset => "UTF-8"), but it didn't help.
Also, why are my logs displayed with the "syslog" type instead of the "gelf" type I specified in the monolog config?
You're sending GELF (JSON) output to a syslog listener; the garbled message is the zlib-compressed GELF payload (note the x\x9C zlib header at the start). You need to change monolog to send to the GELF port rather than the syslog port. A sketch follows.
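Applied to the monolog config above, that means pointing the publisher at the gelf input's port. Since no port is set on that input, it listens on the plugin's default, 12201/udp (an assumption based on the default; adjust it if your input is configured differently):
monolog:
    handlers:
        main:
            type: gelf
            publisher:
                hostname: elk-host
                port: 12201  # the gelf input's default UDP port, not the syslog port
            formatter: monolog.formatter.gelf_message
            level: INFO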