Spring JDBC batch update stops processing when a NOT NULL constraint violation occurs - spring-jdbc

I have 10 rows to insert into the database, but one of them will cause a NOT NULL constraint violation. Using a JDBC batch update, I expect all valid rows to be processed even if an error occurs while processing the invalid one, so in the end I expect 9 valid rows to be inserted. The code is written below:
jdbcTemplate().batchUpdate(
        "INSERT INTO table_x (col_a) VALUES (?)",
        new BatchPreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement preparedStatement, int i) throws SQLException {
                preparedStatement.setString(1, "val_a");
            }

            @Override
            public int getBatchSize() {
                return 1;
            }
        });
But somehow, only 1 row is inserted successfully into the database. What am I supposed to do so that all 9 valid rows are inserted successfully, with an error thrown for each invalid row?

This issue was solved by implementing Oracle DML error logging: http://www.dba-oracle.com/t_oracle_dml_error_log.htm
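Oracle's DML error logging moves the problem into the database. Where that feature is not available, a common application-side alternative is to fall back to row-by-row inserts when a batch fails, so valid rows still land and each bad row is reported. Below is a minimal, self-contained Java sketch of that fallback idea; insertRow is a hypothetical stand-in for a real single-row INSERT (no JDBC involved):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFallback {

    // Hypothetical stand-in for a single-row INSERT; throws to simulate
    // a NOT NULL constraint violation on a null value.
    static void insertRow(String value, List<String> inserted) {
        if (value == null) {
            throw new IllegalArgumentException("NOT NULL constraint violated");
        }
        inserted.add(value);
    }

    // Row-by-row fallback: every valid row is inserted and each failure
    // is collected instead of aborting the whole batch.
    static List<String> insertWithFallback(List<String> rows) {
        List<String> inserted = new ArrayList<>();
        List<String> errors = new ArrayList<>();
        for (String row : rows) {
            try {
                insertRow(row, inserted);
            } catch (IllegalArgumentException e) {
                errors.add("row " + row + ": " + e.getMessage());
            }
        }
        System.out.println("inserted=" + inserted.size() + " errors=" + errors.size());
        return inserted;
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 9; i++) {
            rows.add("val_" + i);
        }
        rows.add(null); // the one invalid row
        insertWithFallback(rows); // prints inserted=9 errors=1
    }
}
```

With real JDBC you would catch BatchUpdateException and inspect getUpdateCounts() to see which statements failed; note that whether a driver continues executing the batch after the first failure is driver-specific.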

Related

How to manually commit the offset of a record already sent to the DLT through the CommonErrorHandler

I am building a simple example with Spring Kafka.
If an exception occurs in the service layer, I want to commit the original offset after retrying and publishing the record to the dead letter topic.
However, while the record is published to the dead letter topic correctly, the original message remains uncommitted in Kafka.
My code is as follows.
KafkaConfig.java
...
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setCommonErrorHandler(kafkaListenerErrorHandler());
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    return factory;
}

private CommonErrorHandler kafkaListenerErrorHandler() {
    DefaultErrorHandler defaultErrorHandler = new DefaultErrorHandler(
            new DeadLetterPublishingRecoverer(template, DEAD_TOPIC_DESTINATION_RESOLVER),
            new FixedBackOff(1000, 3));
    defaultErrorHandler.setCommitRecovered(true);
    defaultErrorHandler.setAckAfterHandle(true);
    defaultErrorHandler.setResetStateOnRecoveryFailure(false);
    return defaultErrorHandler;
}
...
KafkaListener.java
...
@KafkaListener(topics = TOPIC_NAME, containerFactory = "kafkaListenerContainerFactory", groupId = "stock-adjustment-0")
public void subscribe(final String message, Acknowledgment ack) throws IOException {
    log.info(String.format("Message Received : [%s]", message));
    StockAdjustment stockAdjustment = StockAdjustment.deserializeJSON(message);
    if (stockService.isAlreadyProcessedOrderId(stockAdjustment.getOrderId())) {
        log.info(String.format("AlreadyProcessedOrderId : [%s]", stockAdjustment.getOrderId()));
    } else {
        if (stockAdjustment.getAdjustmentType().equals("REDUCE")) {
            stockService.decreaseStock(stockAdjustment);
        }
    }
    ack.acknowledge(); // <<< does not work!
}
...
Stockservice.java
...
if (stockAdjustment.getQty() > stock.getAvailableStockQty()) {
    throw new RuntimeException(String.format("Stock decreased Request [decreasedQty: %s][availableQty : %s]", stockAdjustment.getQty(), stock.getAvailableStockQty()));
}
...
When a RuntimeException occurs in the service layer as above, the record is published to the DLT through the CommonErrorHandler according to the Kafka settings.
However, after the DLT record is published, the original message remains in Kafka, so I need a solution.
I looked it up and found that my configuration is processed through SeekUtils.seekOrRecover(): if the record is still not processed after the maximum number of attempts, an exception occurs and the consumer seeks back to the original offset without committing.
According to the documentation, the AfterRollbackProcessor handles rollback by default when recovery fails, but I don't know how to write the code to commit even when it fails.
EDITED
The code and settings above actually work correctly.
I thought consumer lag would occur, but after stepping through the actual logic (SeekUtils.seekOrRecover()) and checking the offset commit and lag, I confirmed that it works normally.
It was caused by my own mistake.
Records are never removed from the topic (until they expire per the retention settings); only the consumer group's committed offset is updated.
Use kafka-consumer-groups.sh to describe the group and see the committed offset for the failed record that was sent to the DLT.
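The answer above can be illustrated with a small simulation: consuming never mutates the topic, and with setCommitRecovered(true) the committed offset advances past a record that was recovered to the DLT. A plain-Java sketch, with no Kafka client involved (consumeAll is a made-up stand-in, not a Spring Kafka API):

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetSketch {

    // Consume the whole "topic"; failed records go to a dlt list, but the
    // committed offset still advances past them (setCommitRecovered(true)).
    static long consumeAll(List<String> topicLog, List<String> dlt) {
        long committedOffset = 0;
        for (String record : topicLog) {
            if (record.contains("bad")) {
                dlt.add(record);   // recovered to the dead letter topic
            }
            committedOffset++;     // commit past the record either way
        }
        return committedOffset;
    }

    public static void main(String[] args) {
        List<String> log = List.of("msg0", "msg1-bad", "msg2");
        List<String> dlt = new ArrayList<>();
        long committed = consumeAll(log, dlt);
        // The log itself is unchanged; only the committed offset advanced.
        System.out.println("log=" + log.size() + " committed=" + committed + " dlt=" + dlt);
        // prints log=3 committed=3 dlt=[msg1-bad]
    }
}
```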

DbUpdateConcurrencyException with EntityFramework on RemoveRange

We are experiencing something very curious with EntityFramework that we are having a hard time debugging and understanding.
We have a service, which debounces index updates, such that we'll update the index when all changes are made and not for every change. The code looks something like this:
var messages = await _debouncingDbContext.DebounceMessages
    .Where(message => message.ElapsedTime <= now)
    .Take(_internalDebouncingOptionsMonitor.CurrentValue.ReturnLimit)
    .ToListAsync(stoppingToken)
    .ConfigureAwait(false);

if (!messages.Any())
    return;

var tasks = messages.Select(m =>
        _messageSession.Send(m.ReplyTo, new HandleDebouncedMessage { Message = m.OriginalMessage }))
    .ToList();

try
{
    await Task.WhenAll(tasks).ConfigureAwait(false);
}
catch (Exception e)
{
    // Exception handling
}

_debouncingDbContext.DebounceMessages.RemoveRange(messages);
await _debouncingDbContext.SaveChangesAsync().ConfigureAwait(false);
While this is being run, we have another thread that can update the ElapsedTime on the entries. This happens if a new event comes in, before the debounce timer expires.
What we experience is that the await _debouncingDbContext.SaveChangesAsync().ConfigureAwait(false); call throws a DbUpdateConcurrencyException.
The result is that the entries are not being deleted and consequently being queried out over and over again in the initial query. This leads to an exponential growth in our index updates where the same few items are being updated over and over again. Eventually, the system dies.
The only fix we have for this right now, is restarting the service. Once that is done, the next iteration picks up the troublesome messages just fine and everything works again.
I am having a hard time understanding how this can happen. It appears that the DbContext thinks the entries are deleted when in fact they are not; somehow the DbContext gets decoupled from the database state.
I cannot understand how this can happen, when the only thing potentially being changed on the database entry itself is a timestamp and not the actual ID by which it is deleted.
EDIT 18th of November.
Adding a little more context.
The database model, looks like this:
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int Id { get; set; }

public string Key { get; set; }
public string OriginalMessage { get; set; }
public string ReplyTo { get; set; }
public DateTimeOffset ElapsedTime { get; set; }
The only thing configured on the dbcontext is two indexes:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    modelBuilder.Entity<DebounceMessageWrapper>()
        .HasIndex(m => m.Key);
    modelBuilder.Entity<DebounceMessageWrapper>()
        .HasIndex(m => m.ElapsedTime);
}
The flow is quite simple.
We have a dotnet hosted service extending the abstract BackgroundService class from dotnet core. This runs in a while (!stoppingToken.IsCancellationRequested) loop containing the initial code above and a Task.Delay(Y) in each iteration. All the above code does is query all the messages with an ElapsedTime greater than the allowed timespan. For each of those messages, it replies to the ReplyTo address and then deletes the corresponding database entries. It is this delete that fails.
Secondly, we have a MessageHandler listening for events on RabbitMQ. This spawns a thread per physical core on the host. Each of these threads receives messages and looks up messages based on the key in the database model. If the message already exists, the ElapsedTime is updated; if not, the message is inserted into the database.
This gives us X+1 threads, where X equals the number of physical cores on the host, that can potentially alter the database. Each of these threads uses its own scope and thereby a unique instance of the DbContext.
The idea of this service, is as mentioned to debounce index updates. The nature of our system makes these index updates come in bulks and there is no reason to update the index for each of the updates, if it can be done by one index update when all the changes are finished.
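A detail worth keeping in mind here (general EF behavior, not stated in the question): DbUpdateConcurrencyException is thrown when a DELETE or UPDATE affects fewer rows than the change tracker expected, for example because another transaction already deleted the row. A plain-Java sketch of that rows-affected check (removeRange and the in-memory table are made-up stand-ins, not EF code):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ConcurrencyCheckSketch {

    // Simulated table: the set of row ids currently present.
    static Set<Integer> table = new HashSet<>(List.of(1, 2, 3));

    // Delete a batch of ids, mimicking an ORM's optimistic check:
    // if fewer rows were affected than expected, signal a conflict.
    static void removeRange(List<Integer> ids) {
        int affected = 0;
        for (int id : ids) {
            if (table.remove(id)) {
                affected++;
            }
        }
        if (affected != ids.size()) {
            throw new IllegalStateException(
                    "expected " + ids.size() + " rows affected, got " + affected);
        }
    }

    public static void main(String[] args) {
        removeRange(List.of(1, 2));      // fine: both rows existed
        table.remove(3);                 // another "thread" deletes row 3 concurrently
        try {
            removeRange(List.of(3));     // conflict: 0 rows affected instead of 1
        } catch (IllegalStateException e) {
            System.out.println("conflict: " + e.getMessage());
        }
    }
}
```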

Entity framework throwing cannot insert null value exception

I have this code, which adds a restaurant to the database:
public Restaurant Add(Restaurant newRestaurant)
{
    db.Restaurants.Add(new Restaurant());
    return newRestaurant;
}

public int Commit()
{
    return db.SaveChanges();
}
When I call Commit I get this error:
SqlException: Cannot insert the value NULL into column 'Location',
table 'OdeToFood.dbo.Restaurants'; column does not allow nulls. INSERT
fails. The statement has been terminated
Although the Location column is populated from the GUI, it still throws this error. Does anyone have an idea what could cause this issue? The Location column is set to nvarchar and is non-nullable.
You're not passing the newRestaurant object to EF; you're passing a new Restaurant():
public Restaurant Add(Restaurant newRestaurant)
{
    db.Restaurants.Add(new Restaurant()); // <-- this right here is a brand new object
    return newRestaurant;
}
You should be doing this:
public Restaurant Add(Restaurant newRestaurant)
{
    db.Restaurants.Add(newRestaurant);
    return newRestaurant;
}
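The bug generalizes beyond EF: the method persists a freshly constructed, empty object instead of the populated one that was passed in. A plain-Java sketch of the buggy and fixed versions (the Restaurant class and in-memory db list here are hypothetical stand-ins, not the asker's code):

```java
import java.util.ArrayList;
import java.util.List;

public class AddBugSketch {

    static class Restaurant {
        String location;

        Restaurant(String location) { this.location = location; }

        Restaurant() { this.location = null; } // no-arg ctor leaves fields null
    }

    static List<Restaurant> db = new ArrayList<>();

    // Buggy: stores a brand-new empty object, not the caller's.
    static Restaurant addBuggy(Restaurant r) {
        db.add(new Restaurant());
        return r;
    }

    // Fixed: stores the object that the caller actually populated.
    static Restaurant addFixed(Restaurant r) {
        db.add(r);
        return r;
    }

    public static void main(String[] args) {
        addBuggy(new Restaurant("Copenhagen"));
        System.out.println(db.get(0).location); // prints null -> NOT NULL violation at save time
        db.clear();
        addFixed(new Restaurant("Copenhagen"));
        System.out.println(db.get(0).location); // prints Copenhagen
    }
}
```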

Do not get SQL Connection Exception when using EF5 Code First

Using EF5 Code First and a generic Repository/Unit of Work pattern connecting to a SQL Server 2008 R2 database: when I enter an invalid server in the connection string via the app config, an invalid-connection exception is not thrown. It runs through the model-creating method and nothing happens. I was expecting an exception to be thrown which I could catch and return to the user.
Does anyone have any ideas why the exception is not being thrown?
I include code examples below.
Thanks in advance,
Mark
BaseFootballContext takes in the connection string (in this case an invalid string pointing to a server I cannot connect to even via a query management tool):
public class BaseFootballContext : DbContext
{
    public BaseFootballContext(string nameOrConnectionString) : base(nameOrConnectionString)
    {
        Configuration.LazyLoadingEnabled = false;
        Configuration.ProxyCreationEnabled = false;
        Configuration.AutoDetectChangesEnabled = false;
    }

    public IDbSet<Booking> Bookings { get; set; }
    // other IDbSets exist

    /// <summary>
    /// Set primary keys and other properties here using the Fluent API
    /// </summary>
    /// <param name="modelBuilder"></param>
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Booking>()
            .Property(x => x.Id)
            .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        modelBuilder.Entity<Booking>().HasKey(x => x.Id);
        modelBuilder.Entity<Booking>().Property(x => x.Version).IsRowVersion();
        modelBuilder.Entity<Booking>().Ignore(x => x.IsBrief);
        modelBuilder.Entity<Booking>().Property(x => x.ModifiedDate).HasColumnType("datetime2");
        modelBuilder.Entity<Booking>().Property(x => x.CreatedDate).HasColumnType("datetime2");
    }
}
I have no exception handling higher up the chain; when debugging, it goes into the model-creating method and just carries on as normal. I was expecting a connection exception to be thrown here.
(Moving from comment.) The process Mark listed does not trigger database initialization or a call to the database. He then confirmed that he was executing a query on the context, which would indeed trigger a call to the database and should have thrown an exception to alert him to a problem with the connection string. So the question became one of hunting down the exception, which was being "swallowed" by a timeout error. See details about this in the comments above.

How can I disable the use of the __MigrationHistory table in Entity Framework 4.3 Code First?

I'm using Entity Framework 4.3 Code First with a custom database initializer like this:
public class MyContext : DbContext
{
    public MyContext()
    {
        Database.SetInitializer(new MyContextInitializer());
    }
}

public class MyContextInitializer : CreateDatabaseIfNotExists<MyContext>
{
    protected override void Seed(MyContext context)
    {
        // Add defaults to certain tables in the database
        base.Seed(context);
    }
}
Whenever my model changes, I edit my POCOs and mappings manually, and I update my database manually.
When I run my application again, I get this error:
Server Error in '/' Application.
The model backing the 'MyContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269).
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.InvalidOperationException: The model backing the 'MyContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269).
Using EFProfiler, I also notice these queries being executed:
-- statement #1
SELECT [GroupBy1].[A1] AS [C1]
FROM (SELECT COUNT(1) AS [A1]
FROM [dbo].[__MigrationHistory] AS [Extent1]) AS [GroupBy1]
-- statement #2
SELECT TOP (1) [Project1].[C1] AS [C1],
[Project1].[MigrationId] AS [MigrationId],
[Project1].[Model] AS [Model]
FROM (SELECT [Extent1].[MigrationId] AS [MigrationId],
[Extent1].[CreatedOn] AS [CreatedOn],
[Extent1].[Model] AS [Model],
1 AS [C1]
FROM [dbo].[__MigrationHistory] AS [Extent1]) AS [Project1]
ORDER BY [Project1].[CreatedOn] DESC
How can I prevent this?
At first I was sure it was because you set the default initializer in the constructor, but investigating a bit I found that the initializer isn't run when the context is created but rather when you query/add something for the first time.
The provided initializers all check model compatibility, so you are out of luck with them. You can easily make your own initializer like this instead:
public class Initializer : IDatabaseInitializer<Context>
{
    public void InitializeDatabase(Context context)
    {
        if (!context.Database.Exists())
        {
            context.Database.Create();
            Seed(context);
            context.SaveChanges();
        }
    }

    private void Seed(Context context)
    {
        throw new NotImplementedException();
    }
}
That shouldn't check compatibility, and if the database is missing it will create it.
UPDATE: "Context" should be the type of your implementation of DbContext
Pass null to System.Data.Entity.Database's

public static void SetInitializer<TContext>(
    IDatabaseInitializer<TContext> strategy
)
where TContext : DbContext

to disable initialization for your context. You don't need to implement an IDatabaseInitializer to disable it.
https://msdn.microsoft.com/en-us/library/system.data.entity.database.setinitializer(v=vs.113).aspx