getting Blob from DB throws "invalid transaction handle" - spring-mvc

I have a problem with fetching results from a database. I'm using Firebird, c3p0, JdbcTemplate, and Spring MVC.
public class InvoiceDaoImpl implements InvoiceDao {
...
    public Invoice getInvoice(int id) {
        // bind id as a parameter rather than concatenating it into the SQL
        String sql = "SELECT ID, FILENAME, FILEBODY FROM T_FILES WHERE ID = ?";
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        List<Invoice> invoice = jdbcTemplate.query(sql, new InvoiceRowMapper(), id);
        return invoice.get(0);
    }
...
}
The model:
public class Invoice {
    private int ID;
    private Blob FILEBODY;
    private String FILENAME;
    // getters and setters ...
}
RowMapper and Extractor are standard.
In the controller I get the file's binary stream and return it as a file download:
@RequestMapping("admin/file/GetFile/{id}")
public void invoiceGetFile(@PathVariable("id") Integer id, HttpServletResponse response) {
    Invoice invoice = invoiceService.getInvoice(id);
    try {
        response.setHeader("Content-Disposition", "inline;filename=\"" + invoice.getFILENAME() + "\"");
        OutputStream out = response.getOutputStream();
        response.setContentType("application/x-ms-excel");
        IOUtils.copy(invoice.getFILEBODY().getBinaryStream(), out);
        out.flush();
        out.close();
    } catch (IOException e) {
        e.printStackTrace();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
catalina.out:
datasource.DataSourceTransactionManager - Releasing JDBC Connection [com.mchange.v2.c3p0.impl.NewProxyConnection@566b1836] after transaction
http-nio-8443-exec-29 DEBUG datasource.DataSourceUtils - Returning JDBC Connection to DataSource
http-nio-8443-exec-29 DEBUG resourcepool.BasicResourcePool - trace com.mchange.v2.resourcepool.BasicResourcePool@4d2dbc65 [managed: 2, unused: 1, excluded: 0] (e.g. com.mchange.v2.c3p0.impl.NewPooledConnection@4ca5c225)
org.firebirdsql.jdbc.FBSQLException: GDS Exception. 335544332.
invalid transaction handle (expecting explicit transaction start)
at org.firebirdsql.jdbc.FBBlobInputStream.<init>(FBBlobInputStream.java:38)
at org.firebirdsql.jdbc.FBBlob.getBinaryStream(FBBlob.java:404)
I don't understand why I need transaction handling when I'm using SELECT rather than UPDATE or INSERT.

Firebird (and JDBC for that matter) does everything in a transaction, because the transaction determines the visibility of data.
In this specific case the select query was executed within a transaction (presumably auto-commit), but the blob access happens after that transaction has been committed.
This triggers this specific exception because Jaybird knows it needs a transaction to retrieve the blob; and even if Jaybird started a new transaction, accessing the blob still wouldn't work, because a blob handle is only valid inside the transaction that produced it.
You will either need to disable auto-commit and only commit after retrieving the blob (which would require extensive changes to your DAO by the looks of it), or your row mapper needs to explicitly load the blob (for example into a byte array).
Another option is to ensure this query is executed using a holdable result set (in which case Jaybird will materialize the blob into a byte array within the Blob instance for you), but I am unsure if JdbcTemplate allows you to specify use of holdable result sets.
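As a minimal sketch of the row-mapper option, assuming the Invoice model is changed to hold FILEBODY as a byte[] instead of a java.sql.Blob (InvoiceRowMapper and the column names are from the question; the rest is plain Spring JDBC):
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;

public class InvoiceRowMapper implements RowMapper<Invoice> {
    @Override
    public Invoice mapRow(ResultSet rs, int rowNum) throws SQLException {
        Invoice invoice = new Invoice();
        invoice.setID(rs.getInt("ID"));
        invoice.setFILENAME(rs.getString("FILENAME"));
        // Materialize the blob here, while the transaction that produced
        // the blob handle is still open; getBytes reads the full content.
        invoice.setFILEBODY(rs.getBytes("FILEBODY"));
        return invoice;
    }
}
The controller can then write the array directly (out.write(invoice.getFILEBODY())), so no blob handle is dereferenced after the transaction has ended.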

Related

Spring Kafka DefaultErrorHandler infinite loop

I'm trying to create a non-retryable listener: on a deserialization error it should just print the suspect message as a string and move on.
However, when explicitly setting the DefaultErrorHandler (in an effort to see the msg/payload body), it goes into a retry loop. Without setting it, it just prints the exception message (expected string but got null) and moves on.
I've tried setting a BackOff with 0 retries, with no luck. Also, I'm still unable to see the contents of the suspect message.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, GenericRecord> kafkaListenerCAFMessageContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, GenericRecord> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(getCafKafkaConfig());
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    factory.setConcurrency(consumerConcurrency);
    DefaultErrorHandler defaultErrorHandler = new DefaultErrorHandler((record, exception) -> {
        // trying to print the payload that doesn't deserialize
        LOGGER.error(exception.getMessage() + record.value().toString()); // but record.value() is always null
    });
    factory.setCommonErrorHandler(defaultErrorHandler);
    return factory;
}

@org.springframework.kafka.annotation.KafkaListener(topics = "gcrs_process_events", containerFactory = "kafkaListenerCAFMessageContainerFactory")
public void listenCafMsgs(ConsumerRecord<String, GenericRecord> record, Acknowledgment ack) {...}
The value has to be null so that the deserializer is type-safe. For example, if you are using the JsonDeserializer to convert a byte[] to a Foo, it would not be type-safe to return a record with a byte[] payload.
See the cause of the ListenerExecutionFailedException: it is a DeserializationException, which has a data field.
/**
 * Get the data that failed deserialization (value or key).
 * @return the data.
 */
public byte[] getData() {
    return this.data; // NOSONAR array reference
}
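A sketch of a recoverer that digs the raw payload out of that cause, combined with a zero-retry FixedBackOff to stop the retry loop (DefaultErrorHandler, DeserializationException, and FixedBackOff are standard Spring Kafka/Spring classes; LOGGER is assumed from the question):
import java.nio.charset.StandardCharsets;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.support.serializer.DeserializationException;
import org.springframework.util.backoff.FixedBackOff;

DefaultErrorHandler errorHandler = new DefaultErrorHandler((record, exception) -> {
    Throwable cause = exception.getCause();
    if (cause instanceof DeserializationException) {
        DeserializationException de = (DeserializationException) cause;
        // getData() holds the raw bytes that failed deserialization
        LOGGER.error("Failed to deserialize: " + new String(de.getData(), StandardCharsets.UTF_8), de);
    } else {
        LOGGER.error("Listener failed: " + exception.getMessage(), exception);
    }
}, new FixedBackOff(0L, 0L)); // zero retries: recover immediately instead of looping
Remember to attach it with factory.setCommonErrorHandler(errorHandler); otherwise the container's default retry behavior applies.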

What is causing my DbUpdateConcurrencyException?

In my .NET Core Web API, I have implemented the transactional outbox pattern to monitor a database table and publish messages to an Azure Service Bus topic whenever a record appears in the database table. This takes place within a hosted service class that inherits from Microsoft.Extensions.Hosting.BackgroundService. This is a stripped-down version of what I have:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    try
    {
        IEnumerable<RelayMessage> messagesToSend = new List<RelayMessage>();
        // _scopeFactory is an implementation of Microsoft.Extensions.DependencyInjection.IServiceScopeFactory:
        using (var scope = _scopeFactory.CreateScope())
        {
            var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>();
            while (!stoppingToken.IsCancellationRequested)
            {
                messagesToSend = await dbContext.RelayMessage.ToListAsync();
                foreach (var message in messagesToSend)
                {
                    try
                    {
                        await SendMessageToAzureServiceBus(message);
                        dbContext.RelayMessage.Remove(message);
                        dbContext.SaveChanges();
                    }
                    catch (Exception ex)
                    {
                        Log.Error(ex, $"Could not send message with id {message.RelayMessageId}.");
                    }
                }
                await Task.Delay(5000, stoppingToken);
            }
        }
        await Task.CompletedTask;
    }
    catch (Exception ex)
    {
        Log.Error(ex, "Exception thrown while processing messages.");
    }
}
The records are being deleted from the database, but the following exception gets thrown on the call to SaveChanges():
Microsoft.EntityFrameworkCore.DbUpdateConcurrencyException: Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions.
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ThrowAggregateUpdateConcurrencyException(Int32 commandIndex, Int32 expectedRowsAffected, Int32 rowsAffected)
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.ConsumeResultSetWithoutPropagation(Int32 commandIndex, RelationalDataReader reader)
at Microsoft.EntityFrameworkCore.Update.AffectedCountModificationCommandBatch.Consume(RelationalDataReader reader)
at Microsoft.EntityFrameworkCore.Update.ReaderModificationCommandBatch.Execute(IRelationalConnection connection)
at Microsoft.EntityFrameworkCore.Update.Internal.BatchExecutor.Execute(IEnumerable`1 commandBatches, IRelationalConnection connection)
at Microsoft.EntityFrameworkCore.Storage.RelationalDatabase.SaveChanges(IList`1 entries)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(IList`1 entriesToSave)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(DbContext _, Boolean acceptAllChangesOnSuccess)
at Microsoft.EntityFrameworkCore.SqlServer.Storage.Internal.SqlServerExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded)
at Microsoft.EntityFrameworkCore.ChangeTracking.Internal.StateManager.SaveChanges(Boolean acceptAllChangesOnSuccess)
at Microsoft.EntityFrameworkCore.DbContext.SaveChanges(Boolean acceptAllChangesOnSuccess)
at Microsoft.EntityFrameworkCore.DbContext.SaveChanges()
at ReinsuranceReferenceSystemApi.Services.ServiceBus.ParticipantPublishingService.ExecuteAsync(CancellationToken stoppingToken)
I did check out the link in the exception message, but am not sure if the information applies to my situation. The RelayMessage instance is created and saved to the database (in a method not shown here), then this method reads it and deletes it. There aren't any modifications of this type anywhere in the application, so I'm unclear on how this could be a concurrency issue.
I'd appreciate any help.
EDIT:
Here's the registration of my DbContext in Startup.cs:
services.AddDbContext<MyDbContext>(o =>
{
    o.UseSqlServer(Configuration.GetConnectionString("MyConnectionString"));
});

UWP Sqlite throws database is locked exception

When I execute the following code:
public async Task<ObservableCollection<CommentModel>> GetTypeWiseComment(int refId, int commentType)
{
    try
    {
        var conn = _dbOperations.GetSyncConnection(DbConnectionType.UserDbConnetion);
        var sqlCommand = new SQLiteCommand(conn)
        {
            CommandText = "bit complex sqlite query"
        };
        List<CommentModel> commentList = null;
        Task commentListTask =
            Task.Factory.StartNew(() => commentList = sqlCommand.ExecuteQuery<CommentModel>().ToList());
        await commentListTask;
        var commentsList = new ObservableCollection<CommentModel>(commentList);
        return commentsList;
    }
    catch (Exception)
    {
        throw;
    }
    finally
    {
        GC.Collect();
    }
}
Sometimes I get the following exception
Message: database is locked
InnerException: N/A
StackTrace: at SQLite.SQLite3.Prepare2(IntPtr db, String query)
at SQLite.SQLiteCommand.Prepare()
at SQLite.SQLiteCommand.<ExecuteDeferredQuery>d__12<com.IronOne.BoardPACWinAppBO.Meeting.MeetingModel>.MoveNext()
at System.Collections.Generic.List<System.Diagnostics.Tracing.FieldMetadata>..ctor(Collections.Generic.IEnumerable<System.Diagnostics.Tracing.FieldMetadata> collection)
at BoardPACWinApp!<BaseAddress>+0xaa36ca
at com.IronOne.BoardPACWinAppDAO.Comments.CommentsDAO.<>c__DisplayClass4_0.<GetCommentTypeWiseComment>b__0()
at SharedLibrary!<BaseAddress>+0x38ec7b
at SharedLibrary!<BaseAddress>+0x4978cc
Can anyone point out what's wrong with my code?
There is another sync process going on in the background, and sometimes it has a bulk of records which may take more than 10 seconds to process. If the code above happens to execute while that sync is writing to the DB, it might block the reads, right?
If so, how do I read from SQLite while another process writes to the DB?
Thank you.
As @Mark Benningfield mentioned, enabling WAL mode almost solved my problem. However, there was another issue that created a lot of SQLite connections in my app; I solved that by creating a singleton module that handles database connections.
Please comment and ask if you require more information if you encounter a similar issue. Thanks.

ExecuteStoredProcedureAsync running it across partition keys

I need to have one stored procedure which I can run for different partition keys. My collection is partitioned on one key, entityname, and I want to execute the stored procedure for each entityname value in the collection.
sproc = await client.CreateStoredProcedureAsync(collectionLink, sproc,
    new RequestOptions { PartitionKey = new PartitionKey(partitionkey) });
StoredProcedureResponse<int> scriptResult = await client.ExecuteStoredProcedureAsync<int>(
    sproc.SelfLink,
    new RequestOptions { PartitionKey = new PartitionKey(partitionkey) },
    args);
I get the following exception:
Requests originating from scripts cannot reference partition keys other than the one for which client request was submitted
Is it necessary to create a stored procedure in each partition based on key?
Is it possible to have one stored procedure which can execute for all keys?
When a stored procedure is executed from the client, RequestOptions specifies the partition key; the stored procedure runs in the context of that partition and cannot operate on (e.g. create) docs that have a different partition key value.
What you can do is execute the sproc from the client once per partition key. For instance, if the sproc bulk-creates documents, you can group the docs by partition key and send each group (this can be done in parallel) to the sproc, providing the partition key value in RequestOptions. Would that be helpful?
You don't have to create the sproc for each partition key; just create it once, without providing a partition key.
I have implemented the above in Java.
There are differences in how this is done with the Java SDK versus the stored procedure itself.
Please note the use of String and the need to separate records based on partition key.
Bulk import stored procedure used: https://learn.microsoft.com/en-us/azure/documentdb/documentdb-programming
Below is the client calling the stored procedure:
public void CallStoredProcedure(Map<String, List<String>> finalMap)
{
    for (String key : finalMap.keySet()) {
        try
        {
            ExecuteStoredProcedure(finalMap.get(key), key);
        }
        catch (Exception ex)
        {
            LOG.info(ex.getMessage());
        }
    }
}

public void ExecuteStoredProcedure(List<String> documents, String partitionKey)
{
    try {
        if (documentClient == null) {
            documentClient = new DocumentClient(documentDBHostUrl, documentDBAccessKey, null, ConsistencyLevel.Session);
        }
        options = new RequestOptions();
        options.setPartitionKey(new PartitionKey(partitionKey));
        Object[] sprocsParams = new Object[2];
        sprocsParams[0] = documents;
        sprocsParams[1] = null;
        StoredProcedureResponse rs = documentClient.executeStoredProcedure(selflink, options, sprocsParams);
    } catch (Exception ex) {
        // swallowed here; a failure for one partition key should not stop the rest
    }
}
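For reference, a minimal sketch of how finalMap might be built, grouping the document JSON strings by partition key before fanning out; the docs list and the extractEntityName helper are hypothetical (the collection is partitioned on entityname, per the question):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

Map<String, List<String>> finalMap = new HashMap<>();
for (String docJson : docs) {
    // extractEntityName is a hypothetical helper that reads the
    // "entityname" (partition key) field out of the document JSON
    String key = extractEntityName(docJson);
    finalMap.computeIfAbsent(key, k -> new ArrayList<>()).add(docJson);
}
CallStoredProcedure(finalMap); // one sproc execution per partition key
Each group could also be dispatched in parallel (e.g. via finalMap.entrySet().parallelStream()), as the first answer notes.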

ASP.NET NHibernate transaction duration

Currently in our ASP.NET app we have one session per request, and we create one transaction every time we load or save an object. See below:
public static T FindById<T>(object id)
{
    ISession session = NHibernateHelper.GetCurrentSession();
    ITransaction tx = session.BeginTransaction();
    T obj;
    try
    {
        obj = session.Get<T>(id);
        tx.Commit();
    }
    catch
    {
        session.Close();
        throw;
    }
    finally
    {
        tx.Dispose();
    }
    return obj;
}
public virtual void Save()
{
    ISession session = NHibernateHelper.GetCurrentSession();
    ITransaction transaction = session.BeginTransaction();
    try
    {
        if (!IsPersisted)
        {
            session.Save(this);
        }
        else
        {
            session.SaveOrUpdateCopy(this);
        }
        transaction.Commit();
    }
    catch (HibernateException)
    {
        if (transaction != null)
        {
            transaction.Rollback();
        }
        if (session.IsOpen)
        {
            session.Close();
        }
        throw;
    }
    finally
    {
        transaction.Dispose();
    }
}
Obviously this isn't ideal as it means you create a new connection to the database every time you load or save an object, which incurs performance overhead.
Questions:
1. If an entity is already loaded in the 1st-level cache, will the BeginTransaction() call open a database connection? I suspect it will...
2. Is there a better way to handle our transaction management so there are fewer transactions and therefore fewer database connections?
Unfortunately the app code is probably too mature to restructure everything like this (with the get and the update in the same transaction):
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var post = session.Get<Post>(1);
    // do something with post
    tx.Commit();
}
Would it be a terrible idea to create one transaction per Request and commit it at the end of the request? I guess the downside is that it ties up one database connection while non-database operations take place.
One transaction per request is considered best practice with NHibernate. This pattern is implemented in Sharp Architecture.
But in NHibernate the BeginTransaction() method doesn't open a connection to the DB. The connection is opened at the first real SQL request and closed just after the query is executed, so NHibernate holds an open connection only for the seconds it takes to perform the query. You can verify this with SQL Profiler.
Additionally, NHibernate always tries to use SQL Server's connection pool, which is why opening your connection may not be so expensive.
Would it be a terrible idea to create one transaction per request and commit it at the end of the request?
It wouldn't be terrible, but I think it's a poor practice. If there is an error and the transaction is rolled back, I would much rather handle it on the page than at the end of the request. I prefer to use one session per request with as many transactions as I need during the request (typically one).
NHibernate is very conscientious about managing its database connections, you don't need to worry about it in most cases.
I don't like your transaction logic, especially since you kill the session if the transaction fails. And I'm not sure why you're calling SaveOrUpdateCopy. NHibernate will detect whether the object needs to be persisted, so the IsPersisted check is probably not needed. I use this pattern:
using (var txn = session.BeginTransaction())
{
    try
    {
        session.SaveOrUpdate(this);
        txn.Commit();
    }
    catch (Exception ex)
    {
        txn.Rollback();
        // log
        // handle, wrap, or throw
    }
}
