In my .NET Core Web API, I have implemented the transactional outbox pattern to monitor a database table and publish messages to an Azure Service Bus topic whenever a record appears in the database table. This takes place within a hosted service class that inherits from Microsoft.Extensions.Hosting.BackgroundService. This is a stripped-down version of what I have:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
try
{
IEnumerable<RelayMessage> messagesToSend = new List<RelayMessage>();
// _scopeFactory is an implementation of Microsoft.Extensions.DependencyInjection.IServiceScopeFactory:
using (var scope = _scopeFactory.CreateScope())
{
var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>();
while (!stoppingToken.IsCancellationRequested)
{
messagesToSend = await dbContext.RelayMessage.ToListAsync();
foreach (var message in messagesToSend)
{
try
{
await SendMessageToAzureServiceBus(message);
dbContext.RelayMessage.Remove(message);
dbContext.SaveChanges();
}
catch (Exception ex)
{
Log.Error(ex, $"Could not send message with id {message.RelayMessageId}.");
}
}
await Task.Delay(5000, stoppingToken);
}
}
await Task.CompletedTask;
}
catch (Exception ex)
{
Log.Error(ex, "Exception thrown while processing messages.");
}
}
The records are being deleted from the database, but the following exception gets thrown on the call to SaveChanges():
Microsoft.EntityFrameworkCore.DbUpdateConcurrencyException: Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions.
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ThrowAggregateUpdateConcurrencyException(Int32 commandIndex, Int32 expectedRowsAffected, Int32 rowsAffected)
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ConsumeResultSetWithoutPropagation(Int32 commandIndex, RelationalDataReader reader)
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.Consume(RelationalDataReader reader)
at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.Execute(IRelationalConnection connection)
at Microsoft.EntityFrameworkCore.Update.Internal.BatchExecutor.Execute(IEnumerable`1 commandBatches, IRelationalConnection connection)
at Microsoft.EntityFrameworkCore.Storage.RelationalDatabase.SaveChanges(IList`1 entries)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(IList`1 entriesToSave)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(DbContext _, Boolean acceptAllChangesOnSuccess)
at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(Boolean acceptAllChangesOnSuccess)
at Microsoft.EntityFrameworkCore.DbContext.SaveChanges(Boolean acceptAllChangesOnSuccess)
at Microsoft.EntityFrameworkCore.DbContext.SaveChanges()
at ReinsuranceReferenceSystemApi.Services.ServiceBus.ParticipantPublishingService.ExecuteAsync(CancellationToken stoppingToken)
I did check out the link in the exception message, but am not sure if the information applies to my situation. The RelayMessage instance is created and saved to the database (in a method not shown here), then this method reads it and deletes it. There aren't any modifications of this type anywhere in the application, so I'm unclear on how this could be a concurrency issue.
I'd appreciate any help.
EDIT:
Here's the registration of my DbContext in Startup.cs:
services.AddDbContext<MyDbContext>(o =>
{
o.UseSqlServer(Configuration.GetConnectionString("MyConnectionString"));
});
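For comparison, here is a rough sketch (not a confirmed fix) of the same loop with one scope, and therefore one DbContext instance, per polling pass instead of one for the lifetime of the service; it reuses the MyDbContext, RelayMessage, and SendMessageToAzureServiceBus names from the code above, and omits the outer try/catch for brevity:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        // A fresh scope per pass means the change tracker never carries entities that
        // may already have been deleted by an earlier pass or by another instance.
        using (var scope = _scopeFactory.CreateScope())
        {
            var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>();
            var messagesToSend = await dbContext.RelayMessage.ToListAsync(stoppingToken);

            foreach (var message in messagesToSend)
            {
                try
                {
                    await SendMessageToAzureServiceBus(message);
                    dbContext.RelayMessage.Remove(message);
                    await dbContext.SaveChangesAsync(stoppingToken);
                }
                catch (Exception ex)
                {
                    Log.Error(ex, $"Could not send message with id {message.RelayMessageId}.");
                }
            }
        }

        await Task.Delay(5000, stoppingToken);
    }
}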
I have an application that uses Spring Integration to send messages to a vendor application over TCP and to receive and process responses. The vendor sends messages without a length header or a message-ending token, and the messages contain carriage returns, so I have implemented a custom deserializer. The messages are XML strings, so I process the input stream looking for a specific closing tag to know when a message is complete. The application works as expected until the vendor application is restarted or a port switch occurs on my side, at which point the CPU usage on my application spikes and the application becomes unresponsive. When the socket closes, the application throws a SocketException: o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Send Failed; nested exception is java.net.SocketException: Connection or outbound has closed. I have set the socket timeout (SoTimeout) to 1 minute.
Here is the connection factory implementation:
@Bean
public AbstractClientConnectionFactory tcpConnectionFactory() {
TcpNetClientConnectionFactory factory = new TcpNetClientConnectionFactory(this.serverIp,
Integer.parseInt(this.port));
return getAbstractClientConnectionFactory(factory, keyStoreName, trustStoreName,
keyStorePassword, trustStorePassword, hostVerify);
}
private AbstractClientConnectionFactory getAbstractClientConnectionFactory(
TcpNetClientConnectionFactory factory, String keyStoreName, String trustStoreName,
String keyStorePassword, String trustStorePassword, boolean hostVerify) {
TcpSSLContextSupport sslContextSupport = new DefaultTcpSSLContextSupport(keyStoreName,
trustStoreName, keyStorePassword, trustStorePassword);
DefaultTcpNetSSLSocketFactorySupport tcpSocketFactorySupport =
new DefaultTcpNetSSLSocketFactorySupport(sslContextSupport);
factory.setTcpSocketFactorySupport(tcpSocketFactorySupport);
factory.setTcpSocketSupport(new DefaultTcpSocketSupport(hostVerify));
factory.setDeserializer(new MessageSerializerDeserializer());
factory.setSerializer(new MessageSerializerDeserializer());
factory.setSoKeepAlive(true);
factory.setSoTimeout(60000);
return factory;
}
Here is the deserialize method:
private String readUntil(InputStream inputStream) throws IOException {
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
String s = "";
byte[] closingTag = CLOSING_MESSAGE_TAG.getBytes(ASCII);
try {
Integer bite;
while (true) {
bite = inputStream.read();
byteArrayOutputStream.write(bite);
byte[] bytes = byteArrayOutputStream.toByteArray();
int start = bytes.length - closingTag.length;
if (start > closingTag.length) {
byte[] subarray = Arrays.copyOfRange(bytes, start, bytes.length);
if (Arrays.equals(subarray, closingTag)) {
s = new String(bytes, ASCII);
break;
}
}
}
} catch (SocketTimeoutException e) {
logger.error("Expected SocketTimeoutException thrown");
} catch (Exception e) {
logger.error("Exception thrown when deserializing message {}", s);
throw e;
}
return s;
}
Any help in identifying the cause of the CPU spike or a suggested fix would be greatly appreciated.
EDIT #1
Adding serialize method.
@Override
public void serialize(String string, OutputStream outputStream) throws IOException {
if (StringUtils.isNotEmpty(string) && StringUtils.startsWith(string, OPENING_MESSAGE_TAG) &&
StringUtils.endsWith(string, CLOSING_MESSAGE_TAG)) {
outputStream.write(string.getBytes(UTF8));
outputStream.flush();
}
}
The inbound-channel-adapter uses the connection factory:
<int-ip:tcp-inbound-channel-adapter id="tcpInboundChannelAdapter"
channel="inboundReceivingChannel"
connection-factory="tcpConnectionFactory"
error-channel="errorChannel"
/>
EDIT #2
Outbound Channel Adapter
<int-ip:tcp-outbound-channel-adapter
id="tcpOutboundChannelAdapter"
channel="sendToTcpChannel"
connection-factory="tcpConnectionFactory"/>
EDIT #3
We have added the throw for the exception and are still seeing the CPU spike, although it is not as dramatic. Could we still be receiving bytes from the socket in the inputStream.read() method? The metrics seem to indicate that the read method is consuming server resources.
@Artem Bilan Thank you for your continued feedback on this. My server metrics seem to indicate that the deserialize method is what is consuming the CPU. I was thinking that the Send Failed error occurs because the vendor restarted their application.
Thus far, I have been unable to replicate this issue other than in production. The only exception I can find in production logs is the SocketException mentioned above.
Thank you.
I'm using Confluent.Kafka (1.4.4) in a .NET Core project as a message broker. In the project's startup I set only "bootstrapservers" to the specific servers from the appSetting.json file, and I produce messages in an API when necessary with the code below in the related class:
public async Task WriteMessage<T>(string topicName, T message)
{
using (var p = new ProducerBuilder<Null, string>(_producerConfig).Build())
{
try
{
var serializedMessage = JsonConvert.SerializeObject(message);
var dr = await p.ProduceAsync(topicName, new Message<Null, string> { Value = serializedMessage });
logger.LogInformation($"Delivered '{dr.Value}' to '{dr.TopicPartitionOffset}'");
}
catch (ProduceException<Null, string> e)
{
logger.LogInformation($"Delivery failed: {e.Error.Reason}");
}
}
}
I have also added the following code in the consumer solution:
public async Task Run()
{
using (var consumerBuilder = new ConsumerBuilder<Ignore, string>(_consumerConfig).Build())
{
consumerBuilder.Subscribe(new List<string>() { "ActiveMemberCardForPanClubEvent", "CreatePanClubEvent", "RemovePanClubEvent" });
CancellationTokenSource cts = new CancellationTokenSource();
Console.CancelKeyPress += (_, e) =>
{
e.Cancel = true; // prevent the process from terminating.
cts.Cancel();
};
try
{
while (true)
{
try
{
var consumer = consumerBuilder.Consume(cts.Token);
if (consumer.Message != null)
{
using (LogContext.PushProperty("RequestId", Guid.NewGuid()))
{
//Do something
logger.LogInformation($"Consumed message '{consumer.Message.Value}' at: '{consumer.TopicPartitionOffset}'.");
await DoJob(consumer.Topic, consumer.Message.Value);
consumer.Topic.Remove(0, consumer.Topic.Length);
}
}
else
{
logger.LogInformation($"message is null for topic '{consumer.Topic}'and partition : '{consumer.TopicPartitionOffset}' .");
consumer.Topic.Remove(0, consumer.Topic.Length);
}
}
catch (ConsumeException e)
{
logger.LogInformation($"Error occurred: {e.Error.Reason}");
}
}
}
catch (OperationCanceledException)
{
// Ensure the consumer leaves the group cleanly and final offsets are committed.
consumerBuilder.Close();
}
}
}
I produce a message, and when the consumer project is running everything goes perfectly and the message is read in the consumer solution.
The problem arises when the consumer project is not running and I queue a message via the producer in the API. After the consumer is started, there is no valid message for the topic to which the message was produced.
I am familiar with message brokers and have experience with them, and I know that a sent message stays on the bus until it is consumed, but I don't understand why it doesn't work that way with Kafka in this project.
The default setting for the "auto.offset.reset" Consumer property is "latest".
That means that (when no offsets have been committed for the consumer group yet) if you write a message to some topic and then subsequently start the consumer, it will skip past any messages written before the consumer was started. This could be why your consumer is not seeing the messages queued by your producer.
The solution is to set "auto.offset.reset" to "earliest" which means that the consumer will start from the earliest offset on the topic.
https://docs.confluent.io/current/installation/configuration/consumer-configs.html#auto.offset.reset
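For example, with Confluent.Kafka this is set on the ConsumerConfig that is passed to the ConsumerBuilder; the server and group id values below are placeholders:
var consumerConfig = new ConsumerConfig
{
    BootstrapServers = "broker1:9092",   // placeholder, use the servers from your appSetting.json
    GroupId = "panclub-consumers",       // placeholder group id
    // Only applies when the group has no committed offsets yet; otherwise the
    // consumer resumes from its last committed offset.
    AutoOffsetReset = AutoOffsetReset.Earliest
};

using (var consumer = new ConsumerBuilder<Ignore, string>(consumerConfig).Build())
{
    consumer.Subscribe("CreatePanClubEvent");
    // consume loop as in the question
}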
I have a message handler which accumulates the messages in a MemoryCache for a given time, so that only the last one will be handled.
When the callback happens I want to forward another message to a handler using the SQL transport, but the SQL connection has been closed by then.
The code looks something like this:
public IBus SqlBus { get; set; }
public async Task Handle(ServiceMessage message)
{
await base.Handle(() =>
{
cache.Set(CacheKey, message, new CacheItemPolicy()
{
AbsoluteExpiration = DateTimeOffset.Now.AddSeconds(10),
RemovedCallback = new CacheEntryRemovedCallback(CacheCallback),
});
return Task.FromResult(0);
}, message);
}
private void CacheCallback(CacheEntryRemovedArguments arguments)
{
if (arguments.RemovedReason == CacheEntryRemovedReason.Expired)
{
var message = arguments.CacheItem.Value as ServiceMessage;
SqlBus.Send(new AnotherMessage()).GetAwaiter().GetResult();
}
}
Are there any approaches that would let me do this?
When is the CacheCallback method called, and on which thread?
It sounds to me like the problem is that the thread calling CacheCallback has a value in AmbientTransactionContext.Current, which is where Rebus enlists queue operations when it can.
If the transaction context was somehow preserved even though the handler finished executing, then the associated cached items (e.g. the SqlConnection and SqlTransaction associated with the SQL transport) will have been closed.
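A rough way to check this (just a diagnostic sketch, assuming Rebus's AmbientTransactionContext from the Rebus.Transport namespace) is to look at the ambient context from inside the callback before sending:
private void CacheCallback(CacheEntryRemovedArguments arguments)
{
    if (arguments.RemovedReason != CacheEntryRemovedReason.Expired) return;

    var message = arguments.CacheItem.Value as ServiceMessage;

    // If Current is non-null here, the callback thread still carries a (now completed)
    // Rebus transaction context, and SqlBus.Send will try to enlist in it.
    var ambientContext = AmbientTransactionContext.Current;
    Console.WriteLine($"Ambient Rebus transaction context present: {ambientContext != null}");

    SqlBus.Send(new AnotherMessage()).GetAwaiter().GetResult();
}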
I am using MVVM, in particular MVVMLight. To broadcast to all of my view models that no internet connection is available, I am using the Messenger class. The view models subscribe to this message in order to reload themselves with offline data, inform the user, etc.
However, I have a problem. When I have the following handler:
private void HandleNoInternetMessage(NoInternetAccessMessage obj)
{
Task.Run(async () => await InitializeForOfflineInternalAsync());
}
public async Task InitializeForOfflineInternalAsync()
{
try
{
WaitingLayerViewModel.ShouldBeVisible = true;
WaitingLayerViewModel.IsBusy = true; //<--exception HRESULT: 0x8001010E (RPC_E_WRONG_THREAD)
bool switchToOffline = await CommonViewModelProvider.InformUserOfNoInternetAccessAndChangeAppState(); //<!- CoreWindow.GetForCurrentThread().Dispatcher is null
await FilterTestItemViewModel.InitializeForOfflineAsync();
await FilterTestItemViewModel.InitializeForOfflineAsync();
WaitingLayerViewModel.ShouldBeVisible = false;
WaitingLayerViewModel.IsBusy = false;
...
}
}
I get the exception HRESULT: 0x8001010E (RPC_E_WRONG_THREAD), because in InitializeForOfflineInternalAsync I am changing some properties of the view model which are bound in XAML (or at least I think that is the reason). However, it is strange, because elsewhere I regularly change bound properties from a worker thread and have no problems with it.
Now, how can I solve this?
The Messenger only lets me provide a delegate that is not async (which makes some sense), so I cannot make the HandleNoInternetMessage method async.
I am using async/await; there is no explicit spawning of threads.
I don't have access to the Dispatcher in the view model, because the view model should not know about platform-dependent stuff. When I tried to use it to show a message, a null reference exception was thrown when calling CoreWindow.GetForCurrentThread().Dispatcher; yet when calling from other places, no such exception was thrown.
I guess the question is: how can I safely run async code that changes bound properties when handling messages from the Messenger?
You're responding to messages that are logically events, so this is an acceptable use case for async void.
private async void HandleNoInternetMessage(NoInternetAccessMessage obj)
{
await InitializeForOfflineInternalAsync();
}
public async Task InitializeForOfflineInternalAsync()
{
try
{
WaitingLayerViewModel.ShouldBeVisible = true;
WaitingLayerViewModel.IsBusy = true;
bool switchToOffline = await CommonViewModelProvider.InformUserOfNoInternetAccessAndChangeAppState();
await FilterTestItemViewModel.InitializeForOfflineAsync();
await FilterTestItemViewModel.InitializeForOfflineAsync();
WaitingLayerViewModel.ShouldBeVisible = false;
WaitingLayerViewModel.IsBusy = false;
...
}
}
Remember that Task.Run is for CPU-bound code (as I describe on my blog).
Currently in our ASP.NET app we have one session per request, and we create one transaction every time we load or update an object. See below:
public static T FindById<T>(object id)
{
ISession session = NHibernateHelper.GetCurrentSession();
ITransaction tx = session.BeginTransaction();
T obj = default(T);
try
{
obj = session.Get<T>(id);
tx.Commit();
}
catch
{
session.Close();
throw;
}
finally
{
tx.Dispose();
}
return obj;
}
public virtual void Save()
{
ISession session = NHibernateHelper.GetCurrentSession();
ITransaction transaction = session.BeginTransaction();
try
{
if (!IsPersisted)
{
session.Save(this);
}
else
{
session.SaveOrUpdateCopy(this);
}
transaction.Commit();
}
catch (HibernateException)
{
if (transaction != null)
{
transaction.Rollback();
}
if (session.IsOpen)
{
session.Close();
}
throw;
}
finally
{
transaction.Dispose();
}
}
Obviously this isn't ideal as it means you create a new connection to the database every time you load or save an object, which incurs performance overhead.
Questions:
1. If an entity is already loaded in the first-level cache, will the BeginTransaction() call open a database connection? I suspect it will...
2. Is there a better way to handle our transaction management so there are fewer transactions and therefore fewer database connections?
Unfortunately the app code is probably too mature to restructure everything like this (with the get and the update in the same transaction):
using(var session = sessionFactory.OpenSession())
using(var tx = session.BeginTransaction())
{
var post = session.Get<Post>(1);
// do something with post
tx.Commit();
}
Would it be a terrible idea to create one transaction per Request and commit it at the end of the request? I guess the downside is that it ties up one database connection while non-database operations take place.
One transaction per request is considered a best practice with NHibernate. This pattern is implemented in Sharp Architecture.
But in NHibernate the BeginTransaction() method doesn't open a connection to the DB. The connection is opened at the first real SQL request and closed just after the query is executed, so NHibernate holds the connection open only for the seconds it takes to perform the query. You can verify this with SQL Profiler.
Additionally, NHibernate always tries to use SQL Server's connection pool, which is why opening a connection may not be so expensive.
Would it be a terrible idea to create one transaction per Request and commit it at the end of the request
It wouldn't be terrible but I think it's a poor practice. If there is an error and the transaction is rolled back, I would much rather handle it on the page than at the end of the request. I prefer to use one session per request with as many transactions as I need during the request (typically one).
NHibernate is very conscientious about managing its database connections, you don't need to worry about it in most cases.
I don't like your transaction logic, especially since you kill the session if the transaction fails. And I'm not sure why you're calling SaveOrUpdateCopy. NHibernate will detect if the object needs to be persisted so the IsPersisted check is probably not needed. I use this pattern:
using (var txn = session.BeginTransaction())
{
try
{
session.SaveOrUpdate(this);
txn.Commit();
}
catch (Exception ex)
{
txn.Rollback();
// log
// handle, wrap, or throw
}
}
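For completeness, here is a rough session-per-request sketch along those lines. It assumes the session factory is reachable through your NHibernateHelper via a hypothetical SessionFactory property, and that current_session_context_class is set to "web" in the NHibernate configuration; transactions are then opened per unit of work inside the request, as above:
// Global.asax.cs
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // Open one session per request and bind it to NHibernate's current session context.
    var session = NHibernateHelper.SessionFactory.OpenSession();   // hypothetical property
    NHibernate.Context.CurrentSessionContext.Bind(session);
}

protected void Application_EndRequest(object sender, EventArgs e)
{
    // Unbind and dispose the request's session; transactions were already committed per unit of work.
    var session = NHibernate.Context.CurrentSessionContext.Unbind(NHibernateHelper.SessionFactory);
    if (session != null)
    {
        session.Dispose();
    }
}
With that in place, NHibernateHelper.GetCurrentSession() can simply return SessionFactory.GetCurrentSession().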