Is it possible to fire CDI events from within an interceptor? (Using JBoss 7.1.1)
For example, if I have an interceptor PerformanceLogInterceptor:
@Interceptors({PerformanceLogInterceptor.class})
public class ProcessHandler extends HandlerBase {
    ...
Could it fire an event like this:
public class PerformanceLogInterceptor {

    private Logger LOG = LoggerFactory.getLogger("PerformanceLog");

    @EJB
    PerformanceMonitor performanceMonitor;

    @Inject
    Event<ExceptionEvent> exceptionEvent;

    @AroundInvoke
    @AroundTimeout
    public Object performanceLog(InvocationContext invocationContext) throws Exception {
        String methodName = invocationContext.getMethod().toString();
        long start = System.currentTimeMillis();
        try {
            return invocationContext.proceed();
        } catch (Exception e) {
            LOG.warn("During invocation of: {} exception occurred: {}", methodName, Throwables.getRootCause(e).getMessage());
            performanceMonitor.addException(methodName, e);
            Exception toSend;
            if (e instanceof EfsobExceptionInformation) {
                toSend = e;
            } else {
                LOG.debug("Wrapping exception");
                EfsobExceptionWrapper wrapped = new EfsobExceptionWrapper(e);
                toSend = wrapped;
            }
            if (exceptionEvent != null) {
                LOG.debug("sending exceptionEvent");
                exceptionEvent.fire(new ExceptionEventBuilder()
                        .setExceptionName(toSend)
                        .setEfsobExceptionType(toSend)
                        .setId(toSend)
                        .setStacktrace(toSend)
                        .build()
                );
            } else {
                LOG.debug("exceptionEvent was null");
            }
            LOG.debug("rethrowing");
            throw toSend;
        } finally {
            long total = System.currentTimeMillis() - start;
            performanceMonitor.addPerformanceMetrics(methodName, total);
        }
    }
}
Note: exceptionEvent is null at runtime in the above.
I moved the event firing into an @Asynchronous method of the PerformanceMonitor bean referenced above... and then it works (why?):
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class PerformanceMonitor {

    @Inject
    Event<ExceptionEvent> exceptionEvent;

    private Logger LOG = LoggerFactory.getLogger("PerformanceMonitor");

    @Asynchronous
    public void addException(String methodName, Exception e) {
        if (exceptionEvent != null) {
            LOG.debug("sending exceptionEvent");
            exceptionEvent.fire(new ExceptionEventBuilder()
                    .setExceptionName(e)
                    .setEfsobExceptionType(e)
                    .setId(e)
                    .setStacktrace(e)
                    .build()
            );
        } else {
            LOG.debug("exceptionEvent was null");
        }
    }
}
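For comparison, here is a minimal sketch of a CDI-managed interceptor bound through an interceptor binding, where @Inject is resolved by the CDI container itself. Everything below is an assumption rather than code from my project: the @PerformanceLogged annotation and the class name are hypothetical, and the interceptor would still need to be enabled in beans.xml.
// PerformanceLogged.java (hypothetical binding annotation)
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.interceptor.InterceptorBinding;

@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface PerformanceLogged {
}

// CdiPerformanceLogInterceptor.java (must also be listed in beans.xml under <interceptors>)
import javax.enterprise.event.Event;
import javax.inject.Inject;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@PerformanceLogged
@Interceptor
public class CdiPerformanceLogInterceptor {

    @Inject
    Event<ExceptionEvent> exceptionEvent;   // injected by CDI, not by the EJB container

    @AroundInvoke
    public Object log(InvocationContext ctx) throws Exception {
        try {
            return ctx.proceed();
        } catch (Exception e) {
            // Same event as in the original interceptor, fired from a CDI-managed instance.
            exceptionEvent.fire(new ExceptionEventBuilder()
                    .setExceptionName(e)
                    .setEfsobExceptionType(e)
                    .setId(e)
                    .setStacktrace(e)
                    .build());
            throw e;
        }
    }
}
The handler would then carry @PerformanceLogged instead of @Interceptors({PerformanceLogInterceptor.class}).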
@Test
public void testType() throws InterruptedException {
    Integer num = 15;
    String name = "Sahil";
    Float percentage = 96.7f;
    DOB dob = DOB.newBuilder().setDay(20).setMonth(8).setYear(2022).build();
    ArrayList<Object> objects = new ArrayList<>(Arrays.asList(num, name, percentage, dob));
    TypeRequest.Builder builder = TypeRequest.newBuilder();
    StreamObserver<TypeResponse> typeResponseStreamObserver = new StreamObserver<TypeResponse>() {
        @Override
        public void onNext(TypeResponse typeResponse) {
            System.out.println("Type : " + typeResponse.getType());
        }

        @Override
        public void onError(Throwable throwable) {
            System.out.println("Error : " + throwable);
        }

        @Override
        public void onCompleted() {
            System.out.println("Finished all requests");
        }
    };
    StreamObserver<TypeRequest> typeRequestStreamObserver = this.calculatorServiceStub.getType(typeResponseStreamObserver);
    for (Object obj : objects) {
        if (obj instanceof Integer) {
            builder.setNum((Integer) obj);
            typeRequestStreamObserver.onNext(builder.build());
        } else if (obj instanceof String) {
            builder.setName((String) obj);
            typeRequestStreamObserver.onNext(builder.build());
        } else if (obj instanceof Float) {
            builder.setFNum((Float) obj);
            typeRequestStreamObserver.onNext(builder.build());
        } else if (obj instanceof DOB) {
            builder.setDob((DOB) obj);
            typeRequestStreamObserver.onNext(builder.build());
        }
        // --------------------------------------------
        Thread.sleep(500);
        // --------------------------------------------
    }
    typeRequestStreamObserver.onNext(builder.clearType().build());
    typeRequestStreamObserver.onCompleted();
}
If I do not add any delay, the console output is just blank. Testing with tools like BloomRPC and Postman works fine, but here I don't know why this happens.
Any help will be much appreciated.
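One possible explanation (an assumption on my side, not verified): the async stub returns immediately, so the test method can finish and the JVM can exit before any responses arrive, and Thread.sleep(500) merely keeps the test alive long enough. A sketch of waiting explicitly with a CountDownLatch instead of sleeping, reusing the stub and message types from the test above (requires java.util.concurrent.CountDownLatch and TimeUnit):
// Sketch: block the test until the server calls onCompleted, instead of sleeping.
CountDownLatch done = new CountDownLatch(1);

StreamObserver<TypeResponse> responseObserver = new StreamObserver<TypeResponse>() {
    @Override
    public void onNext(TypeResponse typeResponse) {
        System.out.println("Type : " + typeResponse.getType());
    }

    @Override
    public void onError(Throwable throwable) {
        System.out.println("Error : " + throwable);
        done.countDown();            // unblock the test on failure too
    }

    @Override
    public void onCompleted() {
        System.out.println("Finished all requests");
        done.countDown();            // unblock the test when the stream is finished
    }
};

StreamObserver<TypeRequest> requestObserver = calculatorServiceStub.getType(responseObserver);
for (Object obj : objects) {
    // ... same instanceof dispatch as in the test above, no Thread.sleep needed ...
}
requestObserver.onCompleted();

// Wait (with a timeout) for the server-side onCompleted before the test method returns.
done.await(10, TimeUnit.SECONDS);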
I have a .NET Core console application that uses Confluent.Kafka.
I built a consumer for consuming messages from a specific topic.
The app is intended to run a few times every day, consume the messages on the specified topic and process them.
It took me a while to understand the consumer's behavior: it will consume messages only if its groupId is one that was never used before.
Every time I change the consumer's groupId, the consumer fetches the messages in the subscribed topic. But on the next runs with the same groupId, consumer.Consume returns null.
This behavior seems related to rebalancing between consumers in the same group. But I don't understand why, since the consumer should exist only throughout the application's lifetime. Before leaving the app, I call consumer.Close() and consumer.Dispose(). These should destroy the consumer, so that on the next run, when I create the consumer again, it will be the first and only consumer on the specified groupId. But as I said, this is not what actually happens.
I know there are messages on the topic - I checked via the command line. I also made sure the topic has only one partition.
The weirdest thing is that I have another .NET Core console app which does the same process - with no issue at all.
I attach the code of the two apps.
Working app - always consuming:
class Program
{
    ...
    static void Main(string[] args)
    {
        if (args.Length != 2)
        {
            Console.WriteLine("Please provide topic name to read and SMTP topic name");
        }
        else
        {
            var services = new ServiceCollection();
            services.AddSingleton<ConsumerConfig, ConsumerConfig>();
            services.AddSingleton<ProducerConfig, ProducerConfig>();
            var serviceProvider = services.BuildServiceProvider();
            var cConfig = serviceProvider.GetService<ConsumerConfig>();
            var pConfig = serviceProvider.GetService<ProducerConfig>();
            cConfig.BootstrapServers = Environment.GetEnvironmentVariable("consumer_bootstrap_servers");
            cConfig.GroupId = "confluence-consumer";
            cConfig.EnableAutoCommit = true;
            cConfig.StatisticsIntervalMs = 5000;
            cConfig.SessionTimeoutMs = 6000;
            cConfig.AutoOffsetReset = AutoOffsetReset.Earliest;
            cConfig.EnablePartitionEof = true;
            pConfig.BootstrapServers = Environment.GetEnvironmentVariable("producer_bootstrap_servers");
            var consumer = new ConsumerHelper(cConfig, args[0]);
            messages = new Dictionary<string, Dictionary<string, UserMsg>>();
            var result = consumer.ReadMessage();
            while (result != null && !result.IsPartitionEOF)
            {
                Console.WriteLine($"Current consumed msg-json: {result.Message.Value}");
                ...
                result = consumer.ReadMessage();
            }
            consumer.Close();
            Console.WriteLine($"Done consuming messages from topic {args[0]}");
        }
    }
}
ConsumerHelper.cs:
namespace AggregateMailing
{
    using System;
    using Confluent.Kafka;

    public class ConsumerHelper
    {
        private string _topicName;
        private ConsumerConfig _consumerConfig;
        private IConsumer<string, string> _consumer;

        public ConsumerHelper(ConsumerConfig consumerConfig, string topicName)
        {
            try
            {
                _topicName = topicName;
                _consumerConfig = consumerConfig;
                var builder = new ConsumerBuilder<string, string>(_consumerConfig);
                _consumer = builder.Build();
                _consumer.Subscribe(_topicName);
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ConsumerHelper: {exc.ToString()}");
            }
        }

        public ConsumeResult<string, string> ReadMessage()
        {
            Console.WriteLine("ReadMessage: start");
            try
            {
                return _consumer.Consume();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ReadMessage: {exc.ToString()}");
                return null;
            }
        }

        public void Close()
        {
            Console.WriteLine("Close: start");
            try
            {
                _consumer.Close();
                _consumer.Dispose();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on Close: {exc.ToString()}");
            }
        }
    }
}
Not-working app - consumes only on the first run after changing the consumer groupId to one never used before:
Program.cs:
class Program
{
    private static SmtpClient smtpClient;
    private static Random random = new Random();

    static void Main(string[] args)
    {
        try
        {
            var services = new ServiceCollection();
            services.AddSingleton<ConsumerConfig, ConsumerConfig>();
            services.AddSingleton<SmtpClient>(new SmtpClient("smtp.gmail.com"));
            var serviceProvider = services.BuildServiceProvider();
            var cConfig = serviceProvider.GetService<ConsumerConfig>();
            cConfig.BootstrapServers = Environment.GetEnvironmentVariable("consumer_bootstrap_servers");
            cConfig.GroupId = "smtp-consumer";
            cConfig.EnableAutoCommit = true;
            cConfig.StatisticsIntervalMs = 5000;
            cConfig.SessionTimeoutMs = 6000;
            cConfig.AutoOffsetReset = AutoOffsetReset.Earliest;
            cConfig.EnablePartitionEof = true;
            var consumer = new ConsumerHelper(cConfig, args[0]);
            ...
            var result = consumer.ReadMessage();
            while (result != null && !result.IsPartitionEOF)
            {
                Console.WriteLine($"current consumed message: {result.Message.Value}");
                var msg = JsonConvert.DeserializeObject<EmailMsg>(result.Message.Value);
                result = consumer.ReadMessage();
            }
            Console.WriteLine("Done sending emails consumed from SMTP topic");
            consumer.Close();
        }
        catch (System.Exception exc)
        {
            Console.WriteLine($"Error on Main: {exc.ToString()}");
        }
    }
}
ConsumerHelper.cs:
using Confluent.Kafka;
using System;
using System.Collections.Generic;

namespace Mailer
{
    public class ConsumerHelper
    {
        private string _topicName;
        private ConsumerConfig _consumerConfig;
        private IConsumer<string, string> _consumer;

        public ConsumerHelper(ConsumerConfig consumerConfig, string topicName)
        {
            try
            {
                _topicName = topicName;
                _consumerConfig = consumerConfig;
                var builder = new ConsumerBuilder<string, string>(_consumerConfig);
                _consumer = builder.Build();
                _consumer.Subscribe(_topicName);
                //_consumer.Assign(new TopicPartition(_topicName, 0));
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ConsumerHelper: {exc.ToString()}");
            }
        }

        public ConsumeResult<string, string> ReadMessage()
        {
            Console.WriteLine("ConsumeResult: start");
            try
            {
                return _consumer.Consume();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on ConsumeResult: {exc.ToString()}");
                return null;
            }
        }

        public void Close()
        {
            Console.WriteLine("Close: start");
            try
            {
                _consumer.Close();
                _consumer.Dispose();
            }
            catch (System.Exception exc)
            {
                Console.WriteLine($"Error on Close: {exc.ToString()}");
            }
            Console.WriteLine("Close: end");
        }
    }
}
I'd like to wrap all the controller methods using Spring AOP for error handling.
But how do I properly pass e.getMessage() from the catch block to ${errorMessage} in error.html?
Thanks for the response!
@Pointcut("within(com.test.mvc.controller.*) && @within(org.springframework.stereotype.Controller)")
public void controllerLayer() {
}

@Pointcut("execution(public String *(..))")
public void publicMethod() {
}

@Pointcut("controllerLayer() && publicMethod()")
public void controllerPublicMethod() {
}

@Around("controllerPublicMethod()")
public String processRequest(ProceedingJoinPoint joinPoint) {
    try {
        return (String) joinPoint.proceed();
    } catch (Throwable e) {
        LOGGER.info("{}", e.getMessage());
        return "error.html";
    }
}
<body>
<h1>Something went wrong!</h1>
<h3 th:text="'Error Message: ' + ${errorMessage}"></h3>
</body>
The following aspect can set the request attribute used to display errorMessage:
@Around("controllerPublicMethod()")
public Object processRequest(ProceedingJoinPoint joinPoint) {
    Object object = null;
    try {
        object = joinPoint.proceed();
    } catch (Throwable e) {
        RequestContextHolder.getRequestAttributes().setAttribute("errorMessage", e.getMessage(), 0); // scope 0 - request, 1 - session
        return "error.html";
    }
    return object;
}
I also recommend exploring the possibilities of @ControllerAdvice.
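For illustration, a minimal sketch of that approach under a couple of assumptions (a view resolver that maps the view name "error" to error.html; the class and method names below are hypothetical): a global @ExceptionHandler receives the exception and the Model, so the message reaches ${errorMessage} without any aspect.
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

// Hypothetical global handler: catches exceptions thrown by @Controller methods
// and exposes the message as ${errorMessage} for the error view.
@ControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(Exception.class)
    public String handleException(Exception e, Model model) {
        model.addAttribute("errorMessage", e.getMessage());
        return "error";   // resolved by the view resolver, e.g. templates/error.html
    }
}
Compared with the @Around advice, this keeps the controllers returning their normal view names and centralizes the error-to-view mapping in one place.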
In a REST service, adding a circuit breaker with Hystrix, I could do the following:
@HystrixCommand(fallbackMethod = "getBackupResult")
@GetMapping(value = "/result")
public ResponseEntity<ResultDto> getResult(@RequestParam("request") String someRequest) {
    ResultDto resultDto = service.parserRequest(someRequest);
    return new ResponseEntity<>(resultDto, HttpStatus.OK);
}

public ResponseEntity<ResultDto> getBackupResult(@RequestParam("request") String someRequest) {
    ResultDto resultDto = new ResultDto();
    return new ResponseEntity<>(resultDto, HttpStatus.OK);
}
Is there something similar I can do for the gRPC call?
public void parseRequest(ParseRequest request, StreamObserver<ParseResponse> responseObserver) {
    try {
        ParseResponse parseResponse = service.parseRequest(request.getSomeRequest());
        responseObserver.onNext(parseResponse);
        responseObserver.onCompleted();
    } catch (Exception e) {
        logger.error("Failed to execute parse request.", e);
        responseObserver.onError(new StatusException(Status.INTERNAL));
    }
}
I solved my problem by implementing the circuit breaker on my client, using the Sentinel library.
To react to the exception ratio, for example, I added this rule:
private static final String KEY = "callGRPC";

private void callGRPC(List<String> userAgents) {
    initDegradeRule();
    ManagedChannel channel = ManagedChannelBuilder.forAddress(grpcHost, grpcPort).usePlaintext()
            .build();
    for (String userAgent : userAgents) {
        Entry entry = null;
        try {
            entry = SphU.entry(KEY);
            UserAgentServiceGrpc.UserAgentServiceBlockingStub stub
                    = UserAgentServiceGrpc.newBlockingStub(channel);
            UserAgentRequest request = UserAgentRequest.newBuilder().setUserAgent(userAgent).build();
            UserAgentResponse userAgentResponse = stub.getUserAgentDetails(request);
        } catch (BlockException e) {
            logger.error("Circuit-breaker is on and the call has been blocked");
        } catch (Throwable t) {
            logger.error("Exception was thrown", t);
        } finally {
            if (entry != null) {
                entry.exit();
            }
        }
    }
    channel.shutdown();
}
private void initDegradeRule() {
    List<DegradeRule> rules = new ArrayList<DegradeRule>();
    DegradeRule rule = new DegradeRule();
    rule.setResource(KEY);                                      // same resource key as used in SphU.entry(KEY)
    rule.setCount(0.5);                                         // threshold: 50% exception ratio
    rule.setGrade(RuleConstant.DEGRADE_GRADE_EXCEPTION_RATIO);  // degrade based on exception ratio
    rule.setTimeWindow(60);                                     // keep the circuit open for 60 seconds
    rules.add(rule);
    DegradeRuleManager.loadRules(rules);
}
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    super.userEventTriggered(ctx, evt);
    if (evt instanceof IdleStateEvent) {
        IdleStateEvent event = (IdleStateEvent) evt;
        if (event.state().equals(IdleState.READER_IDLE)) {
            System.out.println("READER_IDLE");
            ctx.disconnect();
            ctx.close();
        }
    }
}
Do I need to invoke ctx.disconnect() before ctx.close()?
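For comparison, a minimal sketch of the variant I would expect to behave the same (an assumption based on Netty 4 socket transports, where TCP has no disconnect operation separate from close, so disconnect() falls back to close()):
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    if (evt instanceof IdleStateEvent
            && ((IdleStateEvent) evt).state() == IdleState.READER_IDLE) {
        System.out.println("READER_IDLE");
        // Assumption: for connection-oriented (TCP) channels, close() tears the connection down on its own.
        ctx.close();
    } else {
        super.userEventTriggered(ctx, evt);
    }
}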