I'm trying to monitor a Symfony app with the ELK stack.
I'm shipping my logs to Logstash with the following configuration:
monolog:
    handlers:
        main:
            type: gelf
            publisher:
                hostname: elk-host
                port: 10514
            formatter: monolog.formatter.gelf_message
            level: INFO
In Kibana, I can see that I receive the logs, but the message is encoded in a strange way; here is an example of what Kibana displays:
x\x9CMP\xC1n\x830\f\xFD\u0015+\xA7V\xAAB\xA1\f(\xD7j;Nڴ\xDD\"Ui0`)\u0004D\(\x9A\xF6\xEF\v\x9B\xD6\xEDf\xBFg\xFB\xF9\xBD\u000F1\xE1\xE8\xA9w\xA2\u0014\xB1܋\x9Dh{ϡ\u0019\xFA\x915Y\xCF^\xDA\xDEh\e\u0018\xDF\u0006\xECܡ\xF7\xBA\xC10\xF2\x8A5\x8E\xE8\f\xB9\u0006\xB8EP\xC2\xF4#*\u0001xct\xEBQ\xB8,#\xEC\xC1\xE9\u000EaSaM\u000E\xAB\u0015l\x90\x9F\u0003\xB6\xD9n\x81
Here is my Logstash configuration file:
input {
    gelf {
        codec => "json"
    }
    syslog {
        port => 10514
        type => "syslog"
    }
}
filter {
}
output {
    elasticsearch {}
}
I tried adding an encoding option (charset => "UTF-8"), but it did not help.
Also, why are my logs displayed with the "syslog" type instead of the "gelf" type I specified in the Monolog config?
You're sending GELF (JSON) output to a syslog listener: your Monolog handler points at port 10514, which is your syslog input. GELF messages are zlib-compressed on the wire, so the syslog input stores the compressed bytes verbatim; that is the garbled text you see in Kibana, and it is also why the events carry the "syslog" type instead of "gelf". You need to send the logs to the GELF input's port rather than the syslog port.
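For example, a minimal sketch assuming the gelf input in your Logstash config keeps its default UDP port 12201; the Monolog handler would point there instead:
monolog:
    handlers:
        main:
            type: gelf
            publisher:
                hostname: elk-host
                port: 12201
            formatter: monolog.formatter.gelf_message
            level: INFO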
I am sending logs to an Azure Event Hub with Serilog (using WriteTo.AzureEventHub(eventHubClient)). After that, I run a Filebeat process with the azure module enabled, which sends these logs to Elasticsearch so I can explore them with Kibana.
The problem I have is that all the information goes into the field "message"; I need to split the information of my logs into different fields to be able to run useful queries.
The way I found was to create an ingest pipeline in Kibana and, through a grok processor, separate the fields inside the "message" and generate multiple fields. In filebeat.yml I set the pipeline name, but nothing happens; it seems the pipeline is not being used.
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "filebeat-otc"
Does anybody know what I am missing? Thanks in advance.
EDIT: I will add an example of my pipeline and my data. In the simulation it works properly:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIME:timestamp}\\s%{LOGLEVEL}\\s{[a-zA-Z]*:%{UUID:CorrelationID},[a-zA-Z]*:%{TEXT:OperationTittle},[a-zA-Z]*:%{TEXT:OriginSystemName},[a-zA-Z]*:%{TEXT:TargetSystemName},[a-zA-Z]*:%{TEXT:OperationProcess},[a-zA-Z]*:%{TEXT:LogMessage},[a-zA-Z]*:%{TEXT:ErrorMessage}}"
          ],
          "pattern_definitions": {
            "LOGLEVEL": "\\[[^\\]]*\\]",
            "TEXT": "[a-zA-Z0-9- ]*"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "15:13:59 [INF] {CorrelationId:83355884-a351-4c8b-af8d-b77c48462f36,OperationTittle:Operation1,OriginSystemName:Fexa,TargetSystemName:Usina,OperationProcess:Testing Log Data,LogMessage:Esto es una buena prueba,ErrorMessage:null}"
      }
    },
    {
      "_source": {
        "message": "20:13:48 [INF] {CorrelationId:8451ee54-efca-40be-91c8-8c8e18e33f58,OperationTittle:null,OriginSystemName:Fexa,TargetSystemName:Donna,OperationProcess:Testing Log Data,LogMessage:null,ErrorMessage:null}"
      }
    }
  ]
}
It seems that when you use a module, Filebeat creates and uses its own ingest pipeline in Elasticsearch, and the pipeline option in the output is ignored.
So my solution was to modify the index.final_pipeline setting. In Kibana I went to Stack Management / Index Management, found my index, opened Edit settings, and set "index.final_pipeline": "the-name-of-my-pipeline".
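The same change can also be made through the Elasticsearch API instead of the Kibana UI; a minimal sketch, with a hypothetical index name:
PUT /filebeat-my-index/_settings
{
  "index.final_pipeline": "filebeat-otc"
}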
I hope this helps somebody.
Credit for this goes to leandrojmp.
I'm using Serilog with the Serilog.Formatting.Json.JsonFormatter formatter in a .NET Core app in GKE. I am logging to Console, which is read by a GKE logging agent. The GKE logging agent expects a "severity" property at the top level of the log event (see the GCP Cloud Logging LogEntry docs).
Because of this, all of my logs show up in GCP Logging with severity "INFO", as the Serilog level ends up inside the jsonPayload property of the LogEntry. Here is an example LogEntry as seen in Cloud Logging:
{
    insertId: "1cu507tg3by7sr1"
    jsonPayload: {
        Properties: {
            SpanId: "|a85df301-4585ee48ea1bc1d1."
            ParentId: ""
            ConnectionId: "0HM64G0TCF3RI"
            RequestPath: "/health/live"
            RequestId: "0HM64G0TCF3RI:00000001"
            TraceId: "a85df301-4585ee48ea1bc1d1"
            SourceContext: "CorrelationId.CorrelationIdMiddleware"
            EventId: {2}
        }
        Level: "Information"
        Timestamp: "2021-02-03T17:40:28.9343987+00:00"
        MessageTemplate: "No correlation ID was found in the request headers"
    }
    resource: {2}
    timestamp: "2021-02-03T17:40:28.934566174Z"
    severity: "INFO"
    labels: {3}
    logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
    receiveTimestamp: "2021-02-03T17:40:32.020942737Z"
}
My first thought was to add a "Severity" property using an Enricher:
using Serilog.Core;
using Serilog.Events;

class SeverityEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        // Adds a "Severity" property to every event (hard-coded to Error for testing).
        logEvent.AddOrUpdateProperty(
            propertyFactory.CreateProperty("Severity", LogEventLevel.Error));
    }
}
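For completeness, the enricher is registered like this (a minimal sketch, assuming the Serilog.Sinks.Console package):
using Serilog;
using Serilog.Formatting.Json;

// Register the enricher and write JSON-formatted events to stdout for the agent.
Log.Logger = new LoggerConfiguration()
    .Enrich.With(new SeverityEnricher())
    .WriteTo.Console(new JsonFormatter())
    .CreateLogger();

Log.Error("test error!");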
The generated log looks like this in GCP, and is still tagged as Info:
{
    insertId: "wqxvyhg43lbwf2"
    jsonPayload: {
        MessageTemplate: "test error!"
        Level: "Error"
        Properties: {
            severity: "Error"
        }
        Timestamp: "2021-02-03T18:25:32.6238842+00:00"
    }
    resource: {2}
    timestamp: "2021-02-03T18:25:32.623981268Z"
    severity: "INFO"
    labels: {3}
    logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
    receiveTimestamp: "2021-02-03T18:25:41.029632785Z"
}
Is there any way in Serilog to add the "severity" property at the same level as "jsonPayload" instead of inside it? I suspect GCP would then pick it up and log the error type appropriately.
As a last resort I could probably use a GCP Logging sink, but my current setup is much more convenient and performant with the GKE Logging Agent already existing.
Here's a related Stack Overflow post, but it offers no information or advice beyond what I already have, which is not enough to solve this: https://stackoverflow.com/questions/57215700
I found the following information detailing how each Serilog level maps to a Stackdriver severity; the table below might help you:

Serilog        Stackdriver
Verbose        Debug
Debug          Debug
Information    Info
Warning        Warning
Error          Error
Fatal          Critical
The complete information can be found at the following link:
https://github.com/manigandham/serilog-sinks-googlecloudlogging#log-level-mapping
I think this code could help you make Stackdriver recognize the severity of the logs emitted by Serilog:
// LogEventLevel comes from Serilog.Events; LogSeverity from Google.Cloud.Logging.Type.
private static LogSeverity TranslateSeverity(LogEventLevel level) => level switch
{
    LogEventLevel.Verbose => LogSeverity.Debug,
    LogEventLevel.Debug => LogSeverity.Debug,
    LogEventLevel.Information => LogSeverity.Info,
    LogEventLevel.Warning => LogSeverity.Warning,
    LogEventLevel.Error => LogSeverity.Error,
    LogEventLevel.Fatal => LogSeverity.Critical,
    _ => LogSeverity.Default
};
I will leave the link to the complete code here
https://github.com/manigandham/serilog-sinks-googlecloudlogging/blob/master/src/Serilog.Sinks.GoogleCloudLogging/GoogleCloudLoggingSink.cs#L251
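If you switch to that sink, the severity mapping above is applied for you. A minimal sketch of the setup, with a hypothetical project ID:
using Serilog;
using Serilog.Sinks.GoogleCloudLogging;

// The sink translates each Serilog level to the matching LogEntry severity.
var options = new GoogleCloudLoggingSinkOptions { ProjectId = "your-gcp-project-id" };

Log.Logger = new LoggerConfiguration()
    .WriteTo.GoogleCloudLogging(options)
    .CreateLogger();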
Greetings!
I have a working ELK stack connected to Redis.
I also have a working stateless Symfony 4 application and I want to send all the production logs to my Redis.
I know Monolog has a Redis handler, but I don't know how I'm supposed to tweak the config/prod/monolog.yaml file to accomplish this, or if there's another approach.
This is how it looks right now:
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
            excluded_http_codes: [404]
        nested:
            type: stream
            path: "php://stderr"
            level: debug
        console:
            type: console
            process_psr_3_messages: false
            channels: ["!event", "!doctrine"]
        deprecation:
            type: stream
            path: "php://stderr"
        deprecation_filter:
            type: filter
            handler: deprecation
            max_level: info
            channels: ["php"]
The approach I took was to first install the Predis client:
composer require predis/predis
Then create a custom service class that extends the RedisHandler class that comes with the Monolog package:
<?php

namespace App\Service\Monolog\Handler;

use Monolog\Handler\RedisHandler;
use Monolog\Logger;
use Predis\Client as PredisClient;

class Redis extends RedisHandler
{
    public function __construct($host, $port = 6379, $level = Logger::DEBUG, $bubble = true, $capSize = false)
    {
        // Connect to Redis and push every record onto the "logstash" list,
        // where Logstash's redis input can pick it up.
        $predis = new PredisClient("tcp://$host:$port");
        $key = 'logstash';

        parent::__construct($predis, $key, $level, $bubble, $capSize);
    }
}
Next, register the service we just created in the services.yml config file:
services:
    monolog.handler.redis:
        class: App\Service\Monolog\Handler\Redis
        arguments: [ '%redis.host%' ]
Be sure the parameter redis.host is set and points to your Redis server; in my case, the value is the IP of my Redis server.
I added other parameters to the class, like the port and the log level. You can pass them when the service is instantiated, just like the host parameter; see the sketch below.
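For example, a sketch assuming you also define a redis.port parameter; Monolog accepts level names as strings:
services:
    monolog.handler.redis:
        class: App\Service\Monolog\Handler\Redis
        arguments: [ '%redis.host%', '%redis.port%', 'debug' ]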
Finally, configure your custom log handler service in your monolog.yaml config file. In my case I need it only for the production logs, with the config as follows:
handlers:
    custom:
        type: service
        id: monolog.handler.redis
        level: debug
        channels: ['!event']
I want to ship stdout from a running application to logz.io using Logstash. The application and Logstash are both Docker images managed by docker-compose, which handles the setup (pulling images, network_mode, logging driver, etc.). Logstash input is handled via the gelf input plugin. The shipping to logz.io is handled via the tcp output plugin.
logstash.conf:
input {
    gelf {
        type => docker
        port => 12201
    }
}
filter {
    mutate {
        add_field => { "token" => "${LOGZIOTOKEN}" }
    }
}
output {
    tcp {
        host => "listener.logz.io"
        port => 5050
        codec => json_lines
    }
}
excerpt from docker-compose.yml:
application:
    ...
    logging:
        driver: "gelf"
        options:
            gelf-address: "udp://0.0.0.0:12201"
This works as expected.
Now there is a TCP proxy server I need to use to ship the logs from the host (running the Logstash instance) to logz.io. Unfortunately, I did not find a proxy option for Logstash's tcp output plugin. Does anyone have a suggestion for this issue?
Logstash's http output plugin has a proxy attribute. You have to use the logz.io HTTP listener ports for shipping: 8070 (http) / 8071 (https).
A working config looks like this:
output {
    http {
        url => "https://listener.logz.io:8071?token=${LOGZIOTOKEN}"
        http_method => "post"
        format => "json"
        content_type => "application/json"
        proxy => {
            host => "${PROXYHOST}"
            port => "${PROXYPORT}"
            scheme => 'http'
            user => "${PROXYUSER}"
            password => "${PROXYPW}"
        }
    }
}
With this setup you do not need the mutate filter from the tcp output config; the token is passed in the URL. Just add the input and ship it!
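Note that the ${...} references are resolved from the Logstash process environment, so the variables have to reach the container. A sketch of the docker-compose side, with hypothetical values:
logstash:
    ...
    environment:
        - LOGZIOTOKEN=your-logzio-token
        - PROXYHOST=proxy.example.com
        - PROXYPORT=3128
        - PROXYUSER=proxyuser
        - PROXYPW=proxypass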
I'm trying to send my Symfony2 logs to a Logstash server, but the server doesn't receive the logs =/
My Logstash config in /etc/logstash/conf.d/logstash.conf is:
input {
    gelf {
        port => "12201"
        host => "0.0.0.0"
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
    }
    stdout {
        debug => true
        debug_format => "json"
    }
}
The files in /var/log/logstash are empty. With Monolog, I captured the strings that are sent:
{"_facility":"request",
"_ctxt_route_parameters":"...",
"_ctxt_request_uri":"...",
"version":"1.0",
"short_message":"...",
"full_message":null,
"host":"dev",
"timestamp":1462196712,
"level":6}
What is wrong with my config? To check, I queried the Elasticsearch indices:
curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana 1 1 1 0 3.1kb 3.1kb
Thanks for helping