Serilog - Add "severity" property to top level of LogEvent for GKE? - .net-core

I'm using Serilog with the Serilog.Formatting.Json.JsonFormatter formatter in a .NET Core app in GKE. I am logging to Console, which is read by a GKE logging agent. The GKE logging agent expects a "severity" property at the top level of the log entry (see the GCP Cloud Logging LogEntry docs).
Because of this, all of my logs show up in GCP Logging with severity "Info", as the Serilog Level is found in the jsonPayload property of the LogEntry in GCP. Here is an example LogEntry as seen in Cloud Logging:
{
  insertId: "1cu507tg3by7sr1"
  jsonPayload: {
    Properties: {
      SpanId: "|a85df301-4585ee48ea1bc1d1."
      ParentId: ""
      ConnectionId: "0HM64G0TCF3RI"
      RequestPath: "/health/live"
      RequestId: "0HM64G0TCF3RI:00000001"
      TraceId: "a85df301-4585ee48ea1bc1d1"
      SourceContext: "CorrelationId.CorrelationIdMiddleware"
      EventId: {2}
    }
    Level: "Information"
    Timestamp: "2021-02-03T17:40:28.9343987+00:00"
    MessageTemplate: "No correlation ID was found in the request headers"
  }
  resource: {2}
  timestamp: "2021-02-03T17:40:28.934566174Z"
  severity: "INFO"
  labels: {3}
  logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
  receiveTimestamp: "2021-02-03T17:40:32.020942737Z"
}
My first thought was to add a "Severity" property using an Enricher:
class SeverityEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        logEvent.AddOrUpdateProperty(
            propertyFactory.CreateProperty("Severity", LogEventLevel.Error));
    }
}
The generated log looks like this in GCP, and is still tagged as Info:
{
  insertId: "wqxvyhg43lbwf2"
  jsonPayload: {
    MessageTemplate: "test error!"
    Level: "Error"
    Properties: {
      severity: "Error"
    }
    Timestamp: "2021-02-03T18:25:32.6238842+00:00"
  }
  resource: {2}
  timestamp: "2021-02-03T18:25:32.623981268Z"
  severity: "INFO"
  labels: {3}
  logName: "projects/ah-cxp-common-gke-np-946/logs/stdout"
  receiveTimestamp: "2021-02-03T18:25:41.029632785Z"
}
Is there any way in Serilog to add the "severity" property at the same level as "jsonPayload" instead of inside it? I suspect GCP would then pick it up and log the error type appropriately.
As a last resort I could probably use a GCP Logging sink, but my current setup is much more convenient and performant since the GKE logging agent is already in place.
Here's a relevant Stack Overflow post, but it offers no information or advice beyond what I already have, which is not enough to solve this: https://stackoverflow.com/questions/57215700
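For what it's worth, one direction that keeps the stdout agent is a custom ITextFormatter that writes "severity" at the top level of each JSON line. The sketch below is only an illustration; the class name GkeJsonFormatter and the exact payload shape are assumptions, not taken from the GKE docs:
using System.IO;
using Serilog.Events;
using Serilog.Formatting;
using Serilog.Formatting.Json;

// Sketch of a custom formatter that emits a top-level "severity" field.
// The rest of the payload layout here is an assumption, not a required shape.
public class GkeJsonFormatter : ITextFormatter
{
    static readonly JsonValueFormatter ValueFormatter = new JsonValueFormatter();

    public void Format(LogEvent logEvent, TextWriter output)
    {
        output.Write("{\"severity\":\"");
        output.Write(MapSeverity(logEvent.Level));
        output.Write("\",\"message\":");
        JsonValueFormatter.WriteQuotedJsonString(logEvent.RenderMessage(), output);

        foreach (var property in logEvent.Properties)
        {
            output.Write(",");
            JsonValueFormatter.WriteQuotedJsonString(property.Key, output);
            output.Write(":");
            ValueFormatter.Format(property.Value, output);
        }

        output.WriteLine("}");
    }

    static string MapSeverity(LogEventLevel level) => level switch
    {
        LogEventLevel.Verbose => "DEBUG",
        LogEventLevel.Debug => "DEBUG",
        LogEventLevel.Information => "INFO",
        LogEventLevel.Warning => "WARNING",
        LogEventLevel.Error => "ERROR",
        LogEventLevel.Fatal => "CRITICAL",
        _ => "DEFAULT"
    };
}
It would be plugged in with WriteTo.Console(new GkeJsonFormatter()) in place of the stock JsonFormatter.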

I found the following information detailing how each Serilog level maps to a Stackdriver severity; the table below might also help you:
Serilog       Stackdriver
Verbose       Debug
Debug         Debug
Information   Info
Warning       Warning
Error         Error
Fatal         Critical
The complete information can be found at the following link
https://github.com/manigandham/serilog-sinks-googlecloudlogging#log-level-mapping
I think this code could help you make Stackdriver recognize the severity of the logs produced by Serilog:
private static LogSeverity TranslateSeverity(LogEventLevel level) => level switch
{
    LogEventLevel.Verbose => LogSeverity.Debug,
    LogEventLevel.Debug => LogSeverity.Debug,
    LogEventLevel.Information => LogSeverity.Info,
    LogEventLevel.Warning => LogSeverity.Warning,
    LogEventLevel.Error => LogSeverity.Error,
    LogEventLevel.Fatal => LogSeverity.Critical,
    _ => LogSeverity.Default
};
I will leave the link to the complete code here:
https://github.com/manigandham/serilog-sinks-googlecloudlogging/blob/master/src/Serilog.Sinks.GoogleCloudLogging/GoogleCloudLoggingSink.cs#L251
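If you do end up falling back to a GCP sink, wiring that package up is only a few lines; a minimal sketch based on its README (the project ID is a placeholder):
using Serilog;
using Serilog.Sinks.GoogleCloudLogging;

// Minimal sketch based on the serilog-sinks-googlecloudlogging README;
// "YOUR_PROJECT_ID" is a placeholder.
var options = new GoogleCloudLoggingSinkOptions { ProjectId = "YOUR_PROJECT_ID" };

Log.Logger = new LoggerConfiguration()
    .WriteTo.GoogleCloudLogging(options)
    .CreateLogger();

// A Serilog Error event should then arrive in Cloud Logging with severity ERROR
// via the TranslateSeverity mapping above.
Log.Error("test error!");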
Greetings!

Related

Ingest pipeline is not working over logs obtained from an event hub with Filebeat

I am sending logs to an Azure Event Hub with Serilog (using WriteTo.AzureEventHub(eventHubClient)). After that I run a Filebeat process with the Azure module enabled, so these logs are sent to Elasticsearch and I can explore them with Kibana.
The problem I have is that all the information goes into the "message" field; I need to split the information from my logs into different fields to be able to run useful queries.
The way I found was to create an ingest pipeline in Kibana and, through a grok processor, separate the fields inside "message" into multiple fields. In filebeat.yml I set the pipeline name, but nothing happens; it seems the pipeline is not being applied.
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "filebeat-otc"
Does anybody know what I am missing? Thanks in advance.
EDIT: I will add an example of my pipeline and my data. In the simulation it works properly:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIME:timestamp}\\s%{LOGLEVEL}\\s{[a-zA-Z]*:%{UUID:CorrelationID},[a-zA-Z]*:%{TEXT:OperationTittle},[a-zA-Z]*:%{TEXT:OriginSystemName},[a-zA-Z]*:%{TEXT:TargetSystemName},[a-zA-Z]*:%{TEXT:OperationProcess},[a-zA-Z]*:%{TEXT:LogMessage},[a-zA-Z]*:%{TEXT:ErrorMessage}}"
          ],
          "pattern_definitions": {
            "LOGLEVEL" : "\\[[^\\]]*\\]",
            "TEXT" : "[a-zA-Z0-9- ]*"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "15:13:59 [INF] {CorrelationId:83355884-a351-4c8b-af8d-b77c48462f36,OperationTittle:Operation1,OriginSystemName:Fexa,TargetSystemName:Usina,OperationProcess:Testing Log Data,LogMessage:Esto es una buena prueba,ErrorMessage:null}"
      }
    },
    {
      "_source": {
        "message": "20:13:48 [INF] {CorrelationId:8451ee54-efca-40be-91c8-8c8e18e33f58,OperationTittle:null,OriginSystemName:Fexa,TargetSystemName:Donna,OperationProcess:Testing Log Data,LogMessage:null,ErrorMessage:null}"
      }
    }
  ]
}
It seems that when you use a module, it creates and uses its own ingest pipeline in Elasticsearch, and the pipeline option in the output is ignored.
So my solution was to modify index.final_pipeline. In Kibana I went to Stack Management / Index Management, found my index, opened Edit Settings, and set "index.final_pipeline": "the-name-of-my-pipeline".
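The same setting can also be applied from Dev Tools with the index settings API; a sketch (the index name is a placeholder for your Filebeat index):
PUT /filebeat-your-index/_settings
{
  "index.final_pipeline": "filebeat-otc"
}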
I hope this helps somebody.
This was thanks to leandrojmp

Error: connect ECONNREFUSED 127.0.0.1:8080

I am using a WordPress website with Local by Flywheel (URL: xyz.local). I created a new Gatsby site and added gatsby-source-woocommerce. I also generated a consumer key and consumer secret from the WooCommerce settings and added them to the api_keys in the config file.
When I run gatsby develop, I get this error:
========== WARNING FOR FIELD products ===========
The following error status was produced: Error: connect ECONNREFUSED 127.0.0.1:8080
================== END WARNING ==================
08:19:23.204Z > gatsby-source-woocommerce: Fetching 0 nodes for field: products
08:19:23.206Z > gatsby-source-woocommerce: Completed fetching nodes for field: products
warn
========== WARNING FOR FIELD products/categories ===========
The following error status was produced: Error: connect ECONNREFUSED 127.0.0.1:8080
================== END WARNING ==================
08:19:23.213Z > gatsby-source-woocommerce: Fetching 0 nodes for field: products/categories
08:19:23.215Z > gatsby-source-woocommerce: Completed fetching nodes for field: products/categories
warn
========== WARNING FOR FIELD products/attributes ===========
The following error status was produced: Error: connect ECONNREFUSED 127.0.0.1:8080
================== END WARNING ==================
Can someone please say if I missed anything, or did anything wrong?
I solved it. The problem is with the plugin.
In the config options of gatsby-source-woocommerce, comment out everything after fields. After commenting it looks like this:
{
  resolve: "@pasdo501/gatsby-source-woocommerce",
  options: {
    // Base URL of Wordpress site
    api: "wordpress.domain",
    // set to false to not see verbose output during build
    // default: true
    verbose: true,
    // true if using https. otherwise false.
    https: false,
    api_keys: {
      consumer_key: <key>,
      consumer_secret: <secret>,
    },
    // Array of strings with fields you'd like to create nodes for...
    fields: ["products", "products/categories", "products/attributes"],
  },
},
Head to the @pasdo501/gatsby-source-woocommerce folder (in node_modules) -> gatsby-node.js,
change api_version = "wc/v3" to "wc/v2",
change wpAPIPrefix = null to "wp-json",
and save it.
Voila.
No need to change the package. You can do this:
Add /index.php to the end of api.
Set wpAPIPrefix to wp-json.
Set query_string_auth to true (I'm not sure if this one is necessary).
{
  resolve: '@pasdo501/gatsby-source-woocommerce',
  options: {
    api: 'pro.com/index.php',
    https: true,
    verbose: true,
    api_keys: {
      consumer_key: `ck_...........`,
      consumer_secret: `cs_.................`,
    },
    fields: ['products', 'products/categories', 'products/attributes', 'products/tags'],
    wpAPIPrefix: 'wp-json',
    query_string_auth: true,
    api_version: 'wc/v3',
    // per_page: 100,
    // encoding: 'utf8',
    // axios_config: {}
  }
}

INVALID_ARGUMENT (400 error) when calling Stackdriver Error Reporting API

When trying to invoke the Stackdriver Error Reporting API (via the API explorer or via the Client-Side JavaScript library), I receive the following error:
Request:
{ "message" : "test" }
Response:
{
  "error": {
    "code": 400,
    "message": "Request contains an invalid argument.",
    "status": "INVALID_ARGUMENT"
  }
}
The Stackdriver Error Reporting API is enabled and I have Owner rights to the App Engine project.
Is the API simply not functional? If I'm doing something wrong, can someone try to help?
The documentation for reporting events says that a ServiceContext is required.
If you're only sending a message (not a stacktrace / exception) you'll need to include a context with a reportLocation as well. This is noted in the documentation of the message field, but it's not obvious.
The following works from the API explorer:
{
  "context": {
    "reportLocation": {
      "functionName": "My Function"
    }
  },
  "message": "error message",
  "serviceContext": {
    "service": "My Microservice"
  }
}
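For reference, the same payload can also be sent as a plain HTTP POST to the report endpoint; a sketch (the project ID and API key are placeholders):
curl -X POST \
  "https://clouderrorreporting.googleapis.com/v1beta1/projects/YOUR_PROJECT_ID/events:report?key=YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "context": { "reportLocation": { "functionName": "My Function" } },
    "message": "error message",
    "serviceContext": { "service": "My Microservice" }
  }'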
You might be interested in the docs on How Errors are Grouped, too.
FWIW, I work on this product and I think the error message is too generic. The problem is (?) that the serving stack scrubs the message unless they're annotated as being for public consumption. I'll chase that down.

Ionic 2 Google analytics integration

I would like to use GA for my Ionic 2 app but for hybrid applications it looks a little bit tricky.
In my app.component I have:
this.platform.ready().then(() => {
  // Okay, so the platform is ready and our plugins are available.
  // Here you can do any higher level native things you might need.
  GoogleAnalytics.debugMode();
  GoogleAnalytics.startTrackerWithId(this.config.googleAnalyticsId).then(() => {
    console.log(this.TAG + ' ANALYTICS+');
  }).catch((err) => {
    console.log(this.TAG + ' ANALYTICS- ' + err);
  });
  GoogleAnalytics.enableUncaughtExceptionReporting(true)
    .then((_success) => {
      console.log(_success);
    }).catch((_error) => {
      console.log(_error);
    });
...
And I got this error log:
I: Google Analytics 10.2.98 is starting up. To enable debug logging on a device run:
adb shell setprop log.tag.GAv4 DEBUG
adb logcat -s GAv4
I: Logger is deprecated. To enable debug logging, please run:
adb shell setprop log.tag.GAv4 DEBUG
W: THREAD WARNING: exec() call to UniversalAnalytics.debugMode blocked the main thread for 57ms. Plugin should use CordovaInterface.getThreadPool().
W: Attempted to send a second callback for ID: UniversalAnalytics549833873
Result was: "Invalid action"
W: Attempted to send a second callback for ID: UniversalAnalytics549833875
Result was: "Invalid action"
I: [INFO:CONSOLE(16)] "Tracker not started", source: file:///android_asset/www/build/main.js (16)
My immediate goal is to collect information about bugs during beta testing. Because I didn't find enough information about this subject, which is quite new to me, any hints or best practices would be appreciated.

Catch swiftmailer exception in Symfony2 dev env controller

I'm not sure why I'm not catching exceptions from Swiftmailer in my controller. What am I doing wrong, or missing?
In a controller I have:
try {
    $this->get('mailer')->send($email);
}
catch (\Swift_TransportException $e) {
    $result = array(
        false,
        'There was a problem sending email: ' . $e->getMessage()
    );
}
It seems to get caught by Symfony before it gets to my code, so instead of being able to handle the error myself I get the standard 500 page with:
Swift_TransportException: Connection could not be established
If the email can't be sent there is no need for the application to halt as the email isn't critical - I just want to issue a notice.
Maybe there's a way to disable Symfony's catching of certain exceptions, or to do so only for certain controllers?
When you do $this->container->get("mailer")->send($email); the email message is not being sent at that point if you have spooling turned on. See http://symfony.com/doc/current/cookbook/email/spool.html
If you have the default setting of spool: { type: memory }, the \Swift_TransportException will be thrown during the kernel termination phase, after your controller has exited.
One way around this is to turn off the spooling (but then your users might have to wait while the email is sent), or you can write your own event listener to handle the exception: http://symfony.com/doc/current/cookbook/service_container/event_listener.html
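A rough sketch of that event-listener approach (the class, path, and service names here are made up, and whether it sees an exception thrown during the spool flush at termination depends on your setup):
<?php
// src/MyBundle/EventListener/MailerExceptionListener.php (hypothetical path)
namespace MyBundle\EventListener;

use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\HttpKernel\Event\GetResponseForExceptionEvent;

class MailerExceptionListener
{
    public function onKernelException(GetResponseForExceptionEvent $event)
    {
        if (!($event->getException() instanceof \Swift_TransportException)) {
            return;
        }

        // The email isn't critical, so log a notice as needed and return a
        // normal response instead of letting the 500 page render.
        $event->setResponse(new Response('There was a problem sending email.', 200));
    }
}
Registered as a service with something like:
services:
    my_bundle.mailer_exception_listener:
        class: MyBundle\EventListener\MailerExceptionListener
        tags:
            - { name: kernel.event_listener, event: kernel.exception, method: onKernelException }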
You can try overriding the Twig Exception Handler in config.yml:
twig:
    debug: %kernel.debug%
    strict_variables: %kernel.debug%
    exception_controller: MyBundleName:Exception:show
You then create an exception controller class which extends:
Symfony\Bundle\TwigBundle\Controller\ExceptionController
Read the source code of that file and then override the methods to switch which template is rendered when the exception type is Swift_TransportException.
You can do that by setting a class variable in showAction() and passing it to findTemplate().
showAction:
$this->exceptionClassName = $exception->getClass();
findTemplate:
if (!$debug && $this->exceptionClassName == 'MyBundle\Exception\GenericNotFoundException') {
    return 'BundleName:Exception:generic404.html.twig';
}
For more information, I recommend the KNPUniversity Symfony Screencasts.
