What is the correct/easiest way to redirect output (out/err) from .execute() or .parseArgs() to a logger (org.slf4j.Logger)?
(Production processes are often executed by a scheduler with output to application-specific log files. And stdout/err, if not redirected, gets dumped in the scheduler/server log - which is not appropriate. Hence this question.)
I have something like this:
Logger logger = LoggerFactory.getLogger(MyApp.class);
// ...
new CommandLine(new MyApp())
        .setOut(new LoggerWriter(logger, Level.INFO))
        .setErr(new LoggerWriter(logger, Level.ERROR))
        .execute(args);
// ...
where the LoggerWriter class is inspired by the example here.
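For reference, this is roughly what that class looks like. It is a minimal reconstruction (not the exact linked code), assuming slf4j-api's org.slf4j.event.Level: it extends PrintWriter, so it can be passed straight to setOut()/setErr() (which expect a PrintWriter), and it buffers characters so that each completed line becomes one log event.

import java.io.PrintWriter;
import java.io.Writer;
import org.slf4j.Logger;
import org.slf4j.event.Level;

/**
 * A PrintWriter that forwards each complete line to an SLF4J Logger,
 * so it can be handed directly to CommandLine.setOut(..)/setErr(..).
 */
public class LoggerWriter extends PrintWriter {

    public LoggerWriter(Logger logger, Level level) {
        // autoflush=true so every println(..) is pushed through immediately
        super(new LineBufferingWriter(logger, level), true);
    }

    /** Buffers characters and emits one log event per completed line. */
    private static class LineBufferingWriter extends Writer {
        private final Logger logger;
        private final Level level;
        private final StringBuilder buffer = new StringBuilder();

        LineBufferingWriter(Logger logger, Level level) {
            this.logger = logger;
            this.level = level;
        }

        @Override
        public void write(char[] cbuf, int off, int len) {
            buffer.append(cbuf, off, len);
        }

        @Override
        public void flush() {
            // Log every complete line; keep any trailing partial line buffered.
            int nl;
            while ((nl = buffer.indexOf("\n")) >= 0) {
                log(buffer.substring(0, nl).replace("\r", ""));
                buffer.delete(0, nl + 1);
            }
        }

        @Override
        public void close() {
            flush();
            if (buffer.length() > 0) { // trailing text without a newline
                log(buffer.toString());
                buffer.setLength(0);
            }
        }

        private void log(String message) {
            switch (level) {
                case ERROR: logger.error(message); break;
                case WARN:  logger.warn(message);  break;
                case INFO:  logger.info(message);  break;
                case DEBUG: logger.debug(message); break;
                case TRACE: logger.trace(message); break;
            }
        }
    }
}

With autoflush enabled in the super constructor, anything printed via println() is logged right away; close() logs any trailing text that wasn't newline-terminated.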
In summary, I want to send system information to my HTTP server when log.Fatal() is called, without any extra code at every log statement. Changing/overriding the default behaviour of Info, Fatal, etc. would be fantastic.
In Python, there is a way to add HTTP handlers to the default logging library, which in turn send a POST request when a log record is emitted.
You can create a wrapper package for the built-in log package:
yourproject/log/log.go
package log

import goLog "log"

func Fatal(v ...interface{}) {
    // Do the extra work before calling goLog.Fatal: Fatal calls os.Exit(1),
    // so nothing placed after it would ever run.
    // send request ...
    // reqQueue <- some args
    goLog.Fatal(v...)
}
Then replace the log import with the wrapper across your project:
// import "log"
import "yourproject/log"
func Foo() {
log.Fatal(err)
}
Try creating a type that wraps the standard Logger type, but with your desired enhancement. Then by creating an instance of it called "log" which wraps the default logger, you can continue to use logging in your code in the same way with minimal changes required (since it will have the same name as the log package, and retain *all of the methods).
package main

import _log "log"

type WrappedLogger struct {
    // This field has no name, so we retain all the Logger methods
    *_log.Logger
}

// here we override the behaviour of log.Fatal
func (l *WrappedLogger) Fatal(v ...interface{}) {
    l.Println("doing the HTTP request")
    // do HTTP request
    // now call the original Fatal method from the underlying logger
    l.Logger.Fatal(v...)
}

// wrapping the default logger, but adding our new method
var log = WrappedLogger{_log.Default()}

func main() {
    // notice we can still use Println
    log.Println("hello")
    // but now Fatal does the special behaviour
    log.Fatal("fatal log")
}
*The only gotcha here is that we've replaced the typical log package with a log instance. In many ways, it behaves the same, since most of the functions in the log package are set up as forwards to the default Logger instance for convenience.
However, this means that our new log won't have access to the "true" functions from the log package, such as log.New. For that, you will need to reference the alias to the original package.
// want to create a new logger?
_log.New(out, prefix, flag)
I have an ASP.NET 4.7 web project with a Quartz.NET scheduler implemented in it.
I've read that Quartz.NET uses the Common.Logging abstraction, but I don't know what that really means...
To keep my default application log from being spammed with Quartz messages, I have configured the NLog settings programmatically, in the following way:
var config = new NLog.Config.LoggingConfiguration();

var logfile = new NLog.Targets.FileTarget("logfile")
{
    FileName = "${basedir}/Logs/${logger}_${shortdate}.log",
    Layout = "${longdate}|${level:uppercase=true}|${aspnet-request-ip}|${aspnet-request-url}|${callsite}|${message}|${exception:format=tostring}"
};
var logfileQ = new NLog.Targets.FileTarget("logfileQ")
{
    FileName = "${basedir}/Logs/Quartz_${shortdate}.log",
    Layout = "${longdate}|${level:uppercase=true}||${message}"
};

config.AddTarget(logfile);
config.AddTarget(logfileQ);

config.AddRule(LogLevel.Error, LogLevel.Fatal, logfileQ, "Quartz*", true);
config.AddRule(LogLevel.Trace, LogLevel.Fatal, logfile, "*");

// Apply config
NLog.LogManager.Configuration = config;
NLog.LogManager.ReconfigExistingLoggers();
Then I write my application logs with the following code:
public class MyApiController : ApiController
{
    private static NLog.Logger logger = NLog.LogManager.GetLogger("Application");

    [HttpPost]
    [Authorize]
    public IHttpActionResult Post(DataModel.MyModel m)
    {
        logger.Warn("Unable to add point {0}: localization missing", m.Name);
        return Ok(); // rest of the action omitted
    }
}
So NLog creates an "application_yyyy-MM-dd.log" file correctly and also a "Quartz_yyyy-MM-dd.log" file with only the error and fatal message levels.
The problem is that it also creates the following three files for Quartz, containing all levels:
Quartz.Core.JobRunShell_2020-04-28.log
Quartz.Core.QuartzSchedulerThread_2020-04-28.log
Quartz.Simpl.SimpleJobFactory_2020-04-28.log
It seems like the final=true flag of the first rule is ignored.
What is the right way to configure this? Do I have to disable something in Quartz?
I finally figured it out.
The added rules must be seen as filters: whatever doesn't match a rule goes on to the next one.
The last one effectively says "take everything that hasn't matched before".
The main issue in my rules is the first one, which matches only the Error and Fatal levels; all the other levels of Quartz messages fall through to the next rule, which writes the application log file.
Therefore the rules should be like this:
config.AddRule(LogLevel.Trace, LogLevel.Fatal, logfileQ, "Quartz*", true);
config.AddRule(LogLevel.Trace, LogLevel.Fatal, logfile, "*");
In this way, all messages of any level coming from Quartz are written to the Quartz_ log file.
To avoid recording Quartz's Trace and Info levels at all, I can add a third rule that grabs them, placed before the "grab all" rule:
config.AddRule(LogLevel.Warn, LogLevel.Fatal, logfileQ, "Quartz*", true);
config.AddRule(LogLevel.Trace, LogLevel.Info, noLogging, "Quartz*", true);
config.AddRule(LogLevel.Trace, LogLevel.Fatal, logfile, "*");
where noLogging is a target that doesn't write anywhere (for example, NLog's built-in NullTarget) or one that writes only to the console.
I'm trying to understand when I should use org.springframework.retry.RecoveryCallback and when org.springframework.kafka.listener.KafkaListenerErrorHandler.
As of today, I'm using a class that implements org.springframework.retry.RecoveryCallback to log the error message and send the record to the DLT, and it's working. For sending the message to the DLT I'm using Spring's KafkaTemplate, and that's when I came across KafkaListenerErrorHandler and DeadLetterPublishingRecoverer. Can you please suggest how I should use KafkaListenerErrorHandler and DeadLetterPublishingRecoverer? Can they replace the RecoveryCallback?
Here is my current kafkaListenerContainerFactory code:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(primaryConsumerFactory());
    factory.setRetryTemplate(retryTemplate());
    factory.setRecoveryCallback(recoveryCallback);
    factory.getContainerProperties().setAckMode(AckMode.RECORD);
    factory.setConcurrency(1);
    factory.getContainerProperties().setMissingTopicsFatal(false);
    return factory;
}
If it's working as you want now, why change it?
There are several layers and you can choose which one to do the error handling, depending on your needs.
KafkaListenerErrorHandler would be invoked for each delivery attempt within the retry, so you typically won't use it together with retry.
The retry RecoveryCallback is invoked after retries are exhausted (or immediately if you have classified an exception as not retryable).
The container-level ErrorHandler is invoked if any listener throws an exception, not just @KafkaListener methods.
With recent versions of the framework you can completely replace listener level retry with a SeekToCurrentErrorHandler configured with a DeadLetterPublishingRecoverer and a BackOff.
The DeadLetterPublishingRecoverer is intended for use in a container error handler since it needs the raw ConsumerRecord<?, ?>.
The KafkaListenerErrorHandler only has access to the spring-messaging Message<?> that is converted from the ConsumerRecord<?, ?>.
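A minimal sketch of that container-level configuration, adapting the factory from the question (assumptions: a KafkaTemplate bean is injectable, and primaryConsumerFactory() is the question's existing bean; the FixedBackOff below means two retries one second apart, i.e. three delivery attempts in total, before the record is published to <topic>.DLT):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(
        KafkaTemplate<String, Object> kafkaTemplate) {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(primaryConsumerFactory());
    // No setRetryTemplate()/setRecoveryCallback(): the error handler owns retries now.
    factory.setErrorHandler(new SeekToCurrentErrorHandler(
            new DeadLetterPublishingRecoverer(kafkaTemplate), new FixedBackOff(1000L, 2)));
    factory.getContainerProperties().setAckMode(AckMode.RECORD);
    return factory;
}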
To add on to the excellent context from @GaryRussell, this is what I am currently using.
I am handling any errors (i.e., exceptions) like this:
factory.setErrorHandler(new SeekToCurrentErrorHandler(
new DeadLetterPublishingRecoverer(kafkaTemplate), new FixedBackOff(0L, 0L)));
And to print this error, I have a listener on the .DLT topic and I print the exception stack trace that is stored in the header, like so:
@KafkaListener(id = "MY_ID", topics = MY_TOPIC + ".DLT")
public void listenDlt(ConsumerRecord<String, SomeClassName> consumerRecord,
        @Header(KafkaHeaders.DLT_EXCEPTION_STACKTRACE) String exceptionStackTrace) {
    logger.error(exceptionStackTrace);
}
Note: I am using logger.error because I am redirecting all error messages to an error log file that is being monitored.
BONUS:
If you set the following:
logging.level.org.springframework.kafka=DEBUG
You will see this in your console/log:
xxx [org.springframework.kafka.KafkaListenerEndpointContainer#7-2-C-1] DEBUG o.s.k.listener.SeekToCurrentErrorHandler - Skipping seek of: ConsumerRecord xxx
xxx [kafka-producer-network-thread | producer-3] DEBUG o.s.k.l.DeadLetterPublishingRecoverer - Successful dead-letter publication: SendResult xxx
If you have a better way to log, I would appreciate your comment.
Thanks!
Cheers
I'm using Symfony 3.3 and Monolog as the application logger.
All services are using the injected logger. For example:
public function __construct(LoggerInterface $logger)
{
    $this->logger = $logger;
}

public function work()
{
    $this->logger->info("Some info");
    $this->logger->debug("Some debug");
}
It happens that I use these services from both controllers and Symfony commands. What I'd like is to handle the logs differently when they come from commands.
For example: I have some commands whose purpose is to process a record and the business requires that I store the processing history (logs) in the database for each job.
For example, the command
(server)$ bin/console process:record 12345
should save the log content into the record's table, inside the "processing_logs" field.
The question is: how do I buffer and extract the list of logs? Ideally, without changing the services and without changing the controllers.
In my Ruby (not Rails) program, I have created global variables in the top-level module. These global variables are set as the clients of external services, so my program makes API calls when they are set. I am trying to figure out how to properly stub these API calls in RSpec.
I would like to test a class inside the top module that looks more or less like this; Worker does not directly reference the global variables anywhere in the class.
module TopModule
  class Worker
  end
end
Here is the TopModule:
module TopModule
  # (As an aside, the external service is AWS)
  $client = ExternalService::Client.new(ExternalService.config)
end
I would like to run the RSpec test of TopModule::Worker so it passes:
describe TopModule::Worker do
  it 'shows in various ways that Worker functions'
end
However, I get the following error: Real HTTP connections are disabled. Unregistered request: GET http://... with headers {...} (WebMock::NetConnectNotAllowedError)
The stack trace points to the line in TopModule where $client is defined.
I'm also told:
You can stub this request with the following snippet:
stub_request(:get, "http://...").
with(:headers => {'Accept'=>'*/*', 'Accept-Encoding'=>'...', 'User-Agent'=>'Ruby'}).
to_return(:status => 200, :body => "", :headers => {})
I still have the error when I add the stub to the RSpec.configure block in my spec/spec_helper. Here are the relevant parts of the spec_helper:
require 'webmock/rspec'
require 'codeclimate-test-reporter'

WebMock.disable_net_connect!(allow: 'codeclimate.com')

require 'fileutils'
require 'top_module'

Dir['./spec/support/**/*.rb'].sort.each { |f| require f }

RSpec.configure do |config|
  config.mock_with :rspec do |mocks|
    mocks.verify_doubled_constant_names = true
    mocks.verify_partial_doubles = true
  end
end

def files_directory
  File.dirname(__FILE__) + '/files'
end
Where can I put the stub so it will actually handle the ExternalService API call? I would appreciate your help.
(This code is based on my real code, but not identical)
You can use VCR to stub the external API call: it records the real HTTP interaction once into a "cassette" file and replays it in subsequent runs.
https://github.com/vcr/vcr