We have bridged our log4net with Jira using SMTP.
Now we are worried about what could happen to the Jira server if we get a lot of issues in the production environment, since the site is public.
We have already filtered on Critical and Fatal, but we would like either some accumulator service for log4net or a plain filter which identifies repeating issues and prevents them from being sent via email. Preferably without having to change the error reporting code, so a config-only solution would be best.
I guess dumping the log into a database and then writing a separate listener with some smart code would be a (pricey) alternative.
Maybe this is sufficient for your requirements:
it basically limits the number of emails that are sent in a given time span. I think it should be quite easy to customize this to your needs. I did something similar that even discards messages within a certain time span:
using System;
using log4net.Appender;
using log4net.Core;

public class SmtpThrottlingAppender : SmtpAppender
{
    private DateTime lastFlush = DateTime.MinValue;
    private TimeSpan flushInterval = new TimeSpan(0, 5, 0);

    public TimeSpan FlushInterval
    {
        get { return this.flushInterval; }
        set { this.flushInterval = value; }
    }

    protected override void SendBuffer(LoggingEvent[] events)
    {
        // Only send if the configured interval has elapsed since the last e-mail;
        // events that arrive inside the interval are simply discarded.
        if (DateTime.Now - this.lastFlush > this.flushInterval)
        {
            base.SendBuffer(events);
            this.lastFlush = DateTime.Now;
        }
    }
}
The flush interval can be configured like normal settings of other appenders:
<flushInterval value="01:00:00" />
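For context, a complete appender configuration might look roughly like the sketch below. Only flushInterval belongs to the throttling appender; the rest are standard SmtpAppender settings, and the assembly name, addresses and host are placeholders you would replace with your own:

<appender name="SmtpThrottlingAppender" type="MyApp.Logging.SmtpThrottlingAppender, MyApp">
  <to value="jira-issues@example.com" />
  <from value="app@example.com" />
  <subject value="Production error" />
  <smtpHost value="smtp.example.com" />
  <bufferSize value="512" />
  <lossy value="false" />
  <flushInterval value="01:00:00" />
  <evaluator type="log4net.Core.LevelEvaluator">
    <threshold value="ERROR" />
  </evaluator>
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %-5level %logger - %message%newline" />
  </layout>
</appender>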
You can also use a plain SmtpAppender with a log4net.Core.TimeEvaluator as the Evaluator.
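A configuration sketch of that variant (assuming a log4net version that includes log4net.Core.TimeEvaluator; its interval is given in seconds, so 300 matches the 5-minute example below, and the addresses and host are placeholders):

<appender name="SmtpAppender" type="log4net.Appender.SmtpAppender">
  <to value="jira-issues@example.com" />
  <from value="app@example.com" />
  <subject value="Production error" />
  <smtpHost value="smtp.example.com" />
  <bufferSize value="512" />
  <lossy value="false" />
  <evaluator type="log4net.Core.TimeEvaluator">
    <interval value="300" />
  </evaluator>
</appender>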
Suppose we have an interval of 5 minutes, and events at 00:00, 00:01 and 01:00.
Stefan Egli's SmtpThrottlingAppender will send emails at 00:00 (event 1) and 01:00 (events 2 and 3).
An SmtpAppender with a TimeEvaluator will send emails at 00:05 (events 1 and 2) and 01:05 (event 3).
Which one you want depends on whether you're more bothered by the guaranteed delay or the potentially large delay.
I attempted to combine the SmtpThrottlingAppender with a TimeEvaluator, but couldn't get the behaviour I wanted. I'm beginning to suspect that I should be writing a new ITriggeringEventEvaluator, not a new IAppender.
I am referring to this answer:
https://stackoverflow.com/questions/56728833/seektocurrenterrorhandler-deadletterpublishingrecoverer-is-not-handling-deseria#:~:text=It%20works%20fine%20for%20me%20(note%20that%20Boot%20will%20auto-configure%20the%20error%20handler)...
Can we add manual immediate acknowledgement like below:
@KafkaListener(id = "so56728833", topics = "so56728833")
public void listen(Foo in, Acknowledgment ack) {
    System.out.println(in);
    if (in.getBar().equals("baz")) {
        throw new IllegalStateException("Test retries");
    }
    ack.acknowledge();
}
I want this because of following scenario:
Let's say I have processed 100 messages. Now, while processing the next 10 records, my consumer goes down after processing 4 messages. In this case, a rebalance will be triggered and these 4 messages will be processed again because I have not committed my offsets.
Please help.
Yes, you can use manual immediate here - you can also use AckMode.RECORD and the container will automatically commit each offset after the record has been processed.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#committing-offsets
We have an Axon application that stores new Orders. For each order state change (OrderStateChangedEvent) it schedules a couple of tasks. The tasks are triggered and processed by yet another Saga (TaskSaga - out of scope of the question).
When I delete the projection database but leave the event store, then run the application again, the events are replayed (which is correct), but the tasks are duplicated.
I suppose this is because the OrderStateChangedEvent triggers a new set of ScheduleTaskCommands each time.
Since I'm new to Axon, I can't figure out how to avoid this duplication.
Event store running on AxonServer
Spring Boot application autoconfigures the Axon stuff
Projection database contains the projection tables and the Axon tables:
token_entry
saga_entry
association_value_entry
I suppose all the events are replayed because, by recreating the database, the Axon tables are gone (hence there is no record of the last applied event).
Am I missing something?
Should the token_entry/saga_entry/association_value_entry tables be part of the DB with the projection tables on each application node?
I thought that the event store might be replayed into a new application node's DB at any time without changing the event history, so I can run as many nodes as I wish. Or that I can remove the projection DB at any time and run the application, which causes the events to be projected into the fresh DB again. Or is this not true?
In general, my problem is that one event produces a command leading to new (duplicated) events being produced. Should I avoid this "chaining" of events to avoid duplication?
THANKS!
Axon configuration:
@Configuration
public class AxonConfig {

    @Bean
    public EventSourcingRepository<ApplicationAggregate> applicationEventSourcingRepository(EventStore eventStore) {
        return EventSourcingRepository.builder(ApplicationAggregate.class)
                .eventStore(eventStore)
                .build();
    }

    @Bean
    public SagaStore sagaStore(EntityManager entityManager) {
        return JpaSagaStore.builder()
                .entityManagerProvider(new SimpleEntityManagerProvider(entityManager))
                .build();
    }
}
CreateOrderCommand received by the Order aggregate (the fromCommand method just maps the command 1:1 to an event)
@CommandHandler
public OrderAggregate(CreateOrderCommand cmd) {
    apply(OrderCreatedEvent.fromCommand(cmd))
            .andThenApply(() -> OrderStateChangedEvent.builder()
                    .applicationId(cmd.getOrderId())
                    .newState(OrderState.NEW)
                    .build());
}
Order aggregate sets the properties
@EventSourcingHandler
protected void on(OrderCreatedEvent event) {
    id = event.getOrderId();
    // ... additional properties set
}

@EventSourcingHandler
protected void on(OrderStateChangedEvent cmd) {
    this.state = cmd.getNewState();
}
OrderStateChangedEvent is listened to by a Saga that schedules a couple of tasks for an order in that particular state
private Map<String, TaskStatus> tasks = new HashMap<>();
private OrderState orderState;

@StartSaga
@SagaEventHandler(associationProperty = "orderId")
public void on(OrderStateChangedEvent event) {
    orderState = event.getNewState();
    List<OrderStateAwareTaskDefinition> tasksByState = taskService.getTasksByState(orderState);

    if (tasksByState.isEmpty()) {
        finishSaga(event.getOrderId());
    }

    tasksByState.stream()
            .map(task -> ScheduleTaskCommand.builder()
                    .orderId(event.getOrderId())
                    .taskId(IdentifierFactory.getInstance().generateIdentifier())
                    .targetState(orderState)
                    .taskName(task.getTaskName())
                    .build())
            .peek(command -> tasks.put(command.getTaskId(), SCHEDULED))
            .forEach(command -> commandGateway.send(command));
}
I think I can help you in this situation.
So, this happens because the TrackingToken used by the TrackingEventProcessor which supplies all the events to your Saga instances is initialized to the beginning of the event stream. Due to this the TrackingEventProcessor will start from the beginning of time, thus getting all your commands dispatched for a second time.
There are a couple of things you could do to resolve this.
You could, instead of wiping the entire database, only wipe the projection tables and leave the token table intact.
You could configure the initialTrackingToken of a TrackingEventProcessor to start at the head of the event stream instead of the tail.
Option 1 would work out fine, but requires some delegation from the operations perspective. Option 2 leaves it in the hands of a developer, potentially a little safer than the other solution.
To adjust the token to start at the head, you can instantiate a TrackingEventProcessor with a TrackingEventProcessorConfiguration:
EventProcessingConfigurer configurer;

TrackingEventProcessorConfiguration trackingProcessorConfig =
        TrackingEventProcessorConfiguration.forSingleThreadedProcessing()
                .andInitialTrackingToken(StreamableMessageSource::createHeadToken);

configurer.registerTrackingEventProcessor("{class-name-of-saga}Processor",
        Configuration::eventStore,
        c -> trackingProcessorConfig);
You'd thus create the desired configuration for your Saga and call the andInitialTrackingToken() function, ensuring a head token is created if no token is present yet.
I hope this helps you out Tomáš!
Steven's solution works like a charm, but only in Sagas. For those who want to achieve the same effect in a classic @EventHandler (to skip executions on replay) there is a way. First you have to find out how your tracking event processor is named - I found it in the AxonDashboard (port 8024 on a running AxonServer) - usually it is the location of the component with the @EventHandler annotation (the package name, to be precise). Then add the configuration as Steven indicated in his answer.
@Autowired
public void customConfig(EventProcessingConfigurer configurer) {
    // This prevents replaying some events in @EventHandler
    var trackingProcessorConfig = TrackingEventProcessorConfiguration
            .forSingleThreadedProcessing()
            .andInitialTrackingToken(StreamableMessageSource::createHeadToken);

    configurer.registerTrackingEventProcessor("com.domain.notreplayable",
            org.axonframework.config.Configuration::eventStore,
            c -> trackingProcessorConfig);
}
Rarely, and apparently randomly, Entity Framework will insert many duplicate records. Can anyone explain why this behaviour occurs? This is the second project I've seen this in:
protected void btnAddQual_Click(object sender, EventArgs e)
{
    QEntities ds = new QEntities();
    Qualification qual = new Qualification();
    qual.PersonID = ds.Persons.Where(x => x.Username == User.Identity.Name).Single().PersonID;
    qual.QualificationName = txtQualAddName.Text;
    qual.QualificationProvider = txtQualAddProvider.Text;
    qual.QualificationYear = txtQualAddYear.Text;
    qual.Inactive = false;
    qual.LastUpdatedBy = User.Identity.Name;
    qual.LastUpdatedOn = DateTime.Now;
    ds.Qualifications.Add(qual);
    ds.SaveChanges();
}
Qualifications Table:
public partial class Qualification
{
    public int QualificationID { get; set; }
    public int PersonID { get; set; }
    public string QualificationName { get; set; }
    public string QualificationProvider { get; set; }
    public string QualificationYear { get; set; }
    public bool Inactive { get; set; }
    public string LastUpdatedBy { get; set; }
    public Nullable<System.DateTime> LastUpdatedOn { get; set; }
    public virtual Persons Persons { get; set; }
}
I've seen it create anywhere from three to 32 records in one button click, and when it does, the timestamps can be spread across a good period of time (the last time it was 28 records, all identical apart from the primary key and timestamps, unevenly distributed over 23 minutes).
I've previously put this down to user or browser-based behaviour, but last night it happened while I was the one using the machine.
I didn't notice anything unusual at the time, but its infrequent occurrence makes it a devil to track down. Can anyone suggest a cause?
Edit with additional information:
This is with .net framework 4.5.2 and EF 6.1.3
Edit to explain the bounty:
I've just seen this occur in the following code:
using (exEntities ds = new exEntities())
{
    int initialStations;
    int finalStations;
    int shouldbestations = numStations * numSessions * numRotations * numBlock;
    initialStations = ds.Stations.Count();

    for (int b = 1; b <= numBlock; b++)
    {
        for (int se = 1; se <= numSessions; se++)
        {
            for (int r = 1; r <= numRotations; r++)
            {
                for (int st = 1; st <= numStations; st++)
                {
                    Stations station = new Stations();
                    station.EID = eID;
                    station.Block = b;
                    station.Rotation = r;
                    station.Session = se;
                    station.StationNum = st;
                    station.LastUpdatedBy = User.Identity.Name + " (Generated)";
                    station.LastUpdatedOn = DateTime.Now;
                    station.Inactive = false;
                    ds.Stations.Add(station);
                }
            }
        }
    }
    ds.SaveChanges();
}
In this instance, the numbers of iterations of the loops were 1, 2, 6 and 5 respectively.
This one click (same timestamp) duplicated the complete set of records.
This is a case where you need to add logging into your application. Based on your code, I do not believe Entity Framework is duplicating your data, rather that your code is being triggered in ways that you are not catering for. Where I have seen EF duplicate records has been due to developers passing entities loaded from one DBContext and then associating them to entities created in a second DBContext without checking the DbContext and attaching them first. EF will treat them as "new", and re-insert them. From the code that you have provided, I do not believe this is the case, but it is something to watch out for.
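To illustrate that cross-context pitfall, here is a hypothetical sketch (not your code; it just reuses the type names from your question):

using System.Linq;

void CrossContextDuplicateExample()
{
    using (var ctx1 = new QEntities())
    using (var ctx2 = new QEntities())
    {
        // 'person' is tracked by ctx1 only.
        var person = ctx1.Persons.Single(p => p.Username == "someuser");

        var qual = new Qualification
        {
            QualificationName = "Example",
            Persons = person // association crosses contexts
        };

        // Add() marks the whole reachable graph as Added in ctx2, so 'person' is
        // treated as a new entity and inserted again instead of being referenced.
        ctx2.Qualifications.Add(qual);
        ctx2.SaveChanges();
    }
}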
Firstly, when dealing with web applications in particular, you should write any event handler or POST/PATCH API method to treat every call as untrustworthy. Under normal use these methods should do what you expect, but under abusive use or hacking attempts they can be called when they shouldn't, or carry payloads that they shouldn't. For example, you may expect that an event handler with a record ID of 1234 would only be fired when record 1234 is updated and the user pressed the "Save" button (once), however it is possible to:
Receive the event more than once if the client code does not disable the button on click until the event succeeds.
Receive the event without the client clicking the button when fired through a debugging tool.
Receive the event with ID 1234 when a different record is saved, thanks to a user changing the Record ID in a debugging tool.
Trust nothing, verify and log everything, and if something is out of place, terminate the session. (Force log-out)
For logging, beyond the standard exception logging, I would recommend adding Information traces with an additional compiler constant for a production debug build to monitor one of these cases where this event is getting tripped more than once. Personally I use Diagnostics.Trace and then hook a logging handler like Log4Net into it.
I would recommend adding something like the following:
#if DEBUG
Trace.TraceInformation(string.Format("Add Stations called. (eID: {0})", eID));
Trace.TraceInformation(Environment.StackTrace);
#endif
Then when you do your count check and find a problem:
Trace.TraceWarning(string.Format("Add Station count discrepancy. Expected {0}, Found {1}", shouldBeStations, finalStations));
I put the compiler condition because Environment.StackTrace will incur a cost that you normally do not want in a production system. You can replace "DEBUG" with another custom compilation constant that you enable for a deployment to inspect this issue. Leave it running in the wild and monitor your trace output (database, file, etc.) until the problem manifests. When the warning appears you can go back through the Information traces to see when and where the code is being triggered. You can also put similar tracing in calls that loaded the screen where this event would be triggered to record IDs and User details to see if/how an event might have been triggered for the wrong eID via a code bug or some environmental condition or hack attempt.
Alternatively, for logging you can also consider adding a setting to your application configuration to turn on and off logging events based on logging levels or flags to start/stop capturing scenarios without re-deploying.
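As a rough sketch of that idea, assuming an appSettings flag (the key name and the helper class are made up for illustration):

using System;
using System.Configuration;
using System.Diagnostics;

public static class DiagnosticTrace
{
    // Reads a hypothetical appSettings flag once; flip it in web.config to start
    // or stop capturing the extra traces without recompiling or redeploying.
    private static readonly bool Enabled = string.Equals(
        ConfigurationManager.AppSettings["TraceStationInserts"], "true",
        StringComparison.OrdinalIgnoreCase);

    public static void Info(string format, params object[] args)
    {
        if (Enabled)
        {
            Trace.TraceInformation(format, args);
        }
    }
}

The event handler would then call DiagnosticTrace.Info("Add Stations called. (eID: {0})", eID) instead of relying solely on the #if DEBUG block.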
These kinds of issues are hard to diagnose and fix over something like StackOverflow, but hopefully logging will help highlight something you hadn't considered. Worst case, consider bringing in someone experienced with EF and your core tech stack short-term to have a look over the system; a second pair of eyes over the entire workings may also help point out potential causes.
One small tip. Rather than something like:
qual.PersonID = ds.Persons.Where(x => x.Username == User.Identity.Name).Single().PersonID;
use:
qual.PersonID = ds.Persons.Where(x => x.Username == User.Identity.Name).Select(x => x.PersonID).Single();
The first statement executes a "SELECT * FROM tblPersons WHERE..." (when the entity isn't already cached) and pulls back all columns, when only PersonID is needed. The second executes a "SELECT PersonID FROM tblPersons WHERE...".
My scenario is that I, as a movie distributor, need to update my clients on new movies. I publish this information on a topic with durable subscribers, and clients who want to buy the movie will express their interest.
However, this is where things go south: my implementation of the publisher stops listening as soon as it receives the first reply. Any help would be greatly appreciated. Thank you.
request(Message message)
Sends a request and waits for a reply.
The temporary topic is used for the JMSReplyTo destination; the first reply is returned, and any following replies are discarded.
https://docs.oracle.com/javaee/6/api/javax/jms/TopicRequestor.html
First things first... I have questions regarding the scenario. Is this some kind of test/exercise, or are we talking about a real-world scenario?
Are all clients interested in the movie SEPARATE topic subscribers? How does that scale? Is the plan to have a topic for every movie, with possibly interested parties declaring durable subscribers (one each, for every movie)? This seems to be an abuse of durable subscribers... I would suggest using ONLY one subscriber (in system B) to a "Movie Released" event/topic (from system A), and having some code (in system B) read all the clients from a DB to send emails/messages/whatever. (If systems A and B are the same, it may or may not be a good idea to use EMS at all... it depends.)
If it is not an exercise, I must comment: don't use a MOM (EMS, ActiveMQ) to do a DBMS's (Oracle, PostgreSQL) work!
With the disclaimer section done, I suggest an asynchronous subscription approach (these two clips are taken from the EMS samples directory, file tibjmsAsyncMsgConsumer.java).
Extract from the constructor (the main class must implement ExceptionListener and MessageListener):
ConnectionFactory factory = new com.tibco.tibjms.TibjmsConnectionFactory(serverUrl);

/* create the connection */
connection = factory.createConnection(userName, password);

/* create the session */
session = connection.createSession();

/* set the exception listener */
connection.setExceptionListener(this);

/* create the destination */
if (useTopic)
    destination = session.createTopic(name);
else
    destination = session.createQueue(name);

System.err.println("Subscribing to destination: " + name);

/* create the consumer */
msgConsumer = session.createConsumer(destination);

/* set the message listener */
msgConsumer.setMessageListener(this);

/* start the connection */
connection.start();
The following onMessage method is then called every time a message arrives.
public void onMessage(Message msg)
{
    try
    {
        System.err.println("Received message: " + msg);
    }
    catch (Exception e)
    {
        System.err.println("Unexpected exception in the message callback!");
        e.printStackTrace();
        System.exit(-1);
    }
}
You want to continue reading messages in a loop. Here is an example:
/* read messages */
while (true)
{
    /* receive the message */
    msg = msgConsumer.receive();
    if (msg == null)
        break;

    if (ackMode == Session.CLIENT_ACKNOWLEDGE ||
        ackMode == Tibjms.EXPLICIT_CLIENT_ACKNOWLEDGE ||
        ackMode == Tibjms.EXPLICIT_CLIENT_DUPS_OK_ACKNOWLEDGE)
    {
        msg.acknowledge();
    }

    System.err.println("Received message: " + msg);
}
You may also want to consider a possible issue with durable consumers. If your consumers never pick up their messages, storage will continue to grow on the server side. For this reason you may want to send your messages with an expiration time, and/or limit the maximum number of messages (or size in KB/MB/GB) of the JMS topics you are using.
I need some ideas on how to implement the following requirement in a web application.
I am using log4net in a custom DLL to log the errors. I completed the log4net implementation and it's working fine (ASPX errors are logged to the EventLog and the ASP errors are logged with a FileAppender). All the loggerError() methods are in the custom DLL.
Now I want to monitor the logging. Suppose there is a situation where the loggerError() method is called more than 20 times in just 5 minutes because of a fatal error or the database being down; then I want to track that and send an email to the admin.
My ideas:
1. Set a timer and a count variable to track the number of hits.
2. After each hit, check the number of hits and the elapsed seconds.
3. If it exceeds the threshold limit, then trigger the email...
Not sure how this will work, or whether there is another way to achieve it.
Thanks in advance
I would suggest writing your own Appender to house this logic. The Appender could be used as a wrapper for another Appender (maybe the SmtpAppender?). The ForwardingAppender looks like a good example of a "wrapper" Appender. As each log message comes in, apply your timing and severity logic. If your criteria are met, forward the message to the "wrapped" Appender.
Alternatively this Appender could contain its own logic for generating and sending email (or whatever notification scheme you would like to use).
This Appender could be configured via the standard log4net configuration mechanism (app.config, a separate config file, etc.) so that the time span, error level and error count could all be configurable.
Here is an idea for how something like this might be implemented as an Appender:
using System;
using System.Collections.Generic;
using System.Linq;
using log4net.Appender;
using log4net.Core;

public class MultiThresholdNotifyingAppender : AppenderSkeleton
{
    private readonly Queue<LoggingEvent> loggingEventQueue = new Queue<LoggingEvent>();
    private DateTime windowStart;

    public Level LevelThreshold { get; set; }
    public TimeSpan WindowLength { get; set; }
    public int HitThreshold { get; set; }

    protected override void Append(LoggingEvent loggingEvent)
    {
        if (loggingEvent.Level < LevelThreshold)
        {
            if (loggingEventQueue.Count == 0) return;

            if (loggingEvent.TimeStamp - windowStart >= WindowLength)
            {
                //Level is below the threshold and the time window has elapsed. Remove any queued
                //LoggingEvents until all LoggingEvents in the queue fall within the time window.
                while (loggingEventQueue.Count > 0 &&
                       loggingEvent.TimeStamp - loggingEventQueue.Peek().TimeStamp >= WindowLength)
                {
                    loggingEventQueue.Dequeue();
                }
                if (loggingEventQueue.Count > 0) windowStart = loggingEventQueue.Peek().TimeStamp;
            }
            return;
        }

        //If we got here, then the level is >= the threshold. We want to save the LoggingEvent and we MIGHT
        //want to notify the administrator if the other criteria are met (number of errors within the time window).
        loggingEventQueue.Enqueue(loggingEvent);

        //If this is the first error in the queue, start the time window.
        if (loggingEventQueue.Count == 1) windowStart = loggingEvent.TimeStamp;

        //Too few messages to qualify for notification.
        if (loggingEventQueue.Count < HitThreshold) return;

        //Now we're talking! A lot of error messages in a short period of time.
        if (loggingEvent.TimeStamp - windowStart <= WindowLength)
        {
            //Build a notification message for the administrator by concatenating the "rendered" version of each LoggingEvent.
            string message = string.Join(Environment.NewLine, loggingEventQueue.Select(le => le.RenderedMessage));
            SendTheMessage(message);

            //After sending the message, clear the LoggingEvents and reset the time window.
            loggingEventQueue.Clear();
            windowStart = loggingEvent.TimeStamp;
        }
    }

    protected override void Append(LoggingEvent[] loggingEvents)
    {
        foreach (var le in loggingEvents)
        {
            Append(le);
        }
    }

    //Placeholder for whatever notification mechanism you choose (e.g. forwarding to an SmtpAppender or System.Net.Mail).
    private void SendTheMessage(string message)
    {
    }
}
The idea is to configure your threshold (maybe along with your notification method) values on the appender. Configure your loggers to send their messages to this appender, in addition to any other appenders you want to send them to (EventLog, File, etc). As each logging message comes through, this appender can examine it in the context of the configured threshold values and send the notification as necessary.
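For instance, a configuration sketch along those lines (the assembly name and the EventLogAppender reference are placeholders; the property names mirror LevelThreshold, WindowLength and HitThreshold above, and the TimeSpan value follows the same convention as the flushInterval example earlier):

<log4net>
  <appender name="MultiThresholdNotifyingAppender"
            type="MyApp.Logging.MultiThresholdNotifyingAppender, MyApp">
    <levelThreshold value="ERROR" />
    <windowLength value="00:05:00" />
    <hitThreshold value="20" />
  </appender>
  <root>
    <level value="ALL" />
    <appender-ref ref="MultiThresholdNotifyingAppender" />
    <appender-ref ref="EventLogAppender" />
  </root>
</log4net>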
There could very well be threading issues in this code (unless log4net handles that for you), so you might need a lock when accessing the queue.
Note that this code has not been compiled nor has it been tested.
See this link for some sample custom Appenders in the log4net repository:
http://svn.apache.org/viewvc/logging/log4net/trunk/examples/net/2.0/Appenders/
Since this is for a web application, I would suggest using an Application Variable to keep track of the last 10 errors. The next time an error occurs, replace the oldest error (if necessary to keep the error count under 10) with the new error. Put in some logic to check the dates of the error, and adjust the severity level accordingly.
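Here is a rough sketch of that idea, assuming ASP.NET application state; the names, the 10-error limit and the NotifyAdmin() helper are purely illustrative:

using System;
using System.Collections.Generic;
using System.Web;

public static class ErrorTracker
{
    // Records an error timestamp in application state and notifies the admin when
    // 10 errors arrive within a 5-minute window.
    public static void RecordError()
    {
        HttpApplicationState app = HttpContext.Current.Application;
        app.Lock();
        try
        {
            var recent = app["RecentErrors"] as List<DateTime> ?? new List<DateTime>();
            recent.Add(DateTime.UtcNow);
            while (recent.Count > 10) recent.RemoveAt(0); // keep only the last 10 errors
            app["RecentErrors"] = recent;

            if (recent.Count == 10 && recent[9] - recent[0] <= TimeSpan.FromMinutes(5))
            {
                NotifyAdmin(); // hypothetical e-mail helper
                recent.Clear();
            }
        }
        finally
        {
            app.Unlock();
        }
    }

    private static void NotifyAdmin()
    {
        // Placeholder: send the alert e-mail (e.g. via System.Net.Mail.SmtpClient).
    }
}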