RocksDB Java API support for transactions

Does the RocksDB Java API support transactions? I see that there is a TransactionDB class present in the JAR, but I am not able to begin a transaction on it.
RocksDB db = TransactionDB.open(options, "/Users/jagannathan/Desktop/My Files/db/rocksdb");
I am not able to call db.beginTransaction, as no such method is available. Any pointers on how to accomplish this in Java are appreciated.

You need to use a different open method. You are currently using the open method of the base class (RocksDB).
Use either:
public static TransactionDB open(Options options,
                                 TransactionDBOptions transactionDbOptions,
                                 java.lang.String path)
or
public static TransactionDB open(DBOptions dbOptions,
                                 TransactionDBOptions transactionDbOptions,
                                 java.lang.String path,
                                 java.util.List<ColumnFamilyDescriptor> columnFamilyDescriptors,
                                 java.util.List<ColumnFamilyHandle> columnFamilyHandles)
to get a TransactionDB object. You can then use this object to call #beginTransaction, which returns a Transaction object. The transaction can then be used similarly to a RocksDB instance: you can put, delete, etc., and commit when you're done.
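For illustration, here is a minimal sketch of opening a TransactionDB and committing a transaction; the database path is an assumption and all options are left at their defaults:

import org.rocksdb.*;

public class TransactionExample {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             TransactionDBOptions txnDbOptions = new TransactionDBOptions();
             // Hypothetical path; use your own database directory here.
             TransactionDB txnDb = TransactionDB.open(options, txnDbOptions, "/tmp/rocksdb-txn");
             WriteOptions writeOptions = new WriteOptions()) {
            // beginTransaction is available on TransactionDB, not on RocksDB.
            try (Transaction txn = txnDb.beginTransaction(writeOptions)) {
                txn.put("key".getBytes(), "value".getBytes());
                txn.commit();
            }
        }
    }
}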

Related

How to load the caching layer with data when an ASP.NET Core web API is created?

I have created a web API that handles the creation of a JWT token based on the encrypted user details that it receives in a POST request.
In addition, this STS API should also handle populating the caching layer (Redis or Hazelcast) with all the user data present in the database. Presently I have registered the caching service using dependency injection. This happens only once, when the API is first initialized.
services.AddSingleton<ICacheService, RedisCacheService>();
In the TokenController I added the service as a constructor parameter to initialize the CachingService class and thereby initialize the caching layer, so that when the cacheService object is first initialized it fetches all the user rows from the database and stores them as key-value pairs inside the Redis/Hazelcast database.
public TokenController(
    ICryptographyService cryptographyService,
    crudDBContext crudDBContext,
    IConfiguration configuration,
    ICacheService cacheService)
{
    _cryptographyService = cryptographyService;
    _context = crudDBContext;
    _config = configuration;
    _cacheService = cacheService;
}
But the TokenController constructor is invoked only when an endpoint is called, so I had to create a separate default [HttpGet] endpoint to ensure that the constructor runs when the STS API is first initialized, so that the cacheService object gets created and the data gets loaded into the cache.
public ActionResult<string> Get()
{
    return "STS";
}
Please let me know if there is a proper way of doing this without calling an endpoint: I would like to keep using dependency injection but still have some code run without an endpoint being called. I need dependency injection because I should be able to switch between Redis and Hazelcast by just changing the class name in the Startup.cs file.
With respect to Hazelcast and dependency injection: first, you would need to use the sources and not the Hazelcast NuGet version. Next, the configuration depends on whether you are in a container environment or a hosted environment. In both cases, configuration keys will be gathered from the same sources and in the same order, and options will be registered in the service container and made available via dependency injection.

Achieving a transaction by calling multiple stored procedures using Spring JdbcTemplate

I have a scenario where I have to make multiple stored procedure calls, and if any one of the stored procedures fails, I have to roll back all of them.
May I please know how to achieve this using the Spring JdbcTemplate? What I know is that I can call only one stored procedure per JdbcTemplate call.
Is there any way to invoke a group of procedures in sequence using the Spring JdbcTemplate?
One way of solving this problem is to create another stored procedure that calls all the procedures in turn.
Is there any other, more efficient way to achieve this?
The following code will call multiple stored procedures within the same transaction.
@Transactional(rollbackFor = Exception.class)
public void callStoredProcedures() {
    // Stored procedure 1
    // ...
    // Stored procedure n
}
A transaction is initialized at the method start. All subsequent database calls within that method take part in this transaction, and any exception within the method context rolls back the transaction.
Note that the transaction rollback for this method is configured for any Exception. By default, a transaction is only marked for rollback on exceptions of type RuntimeException; JdbcTemplate methods throw DataAccessException, which is a subclass of RuntimeException. If no rollback is required for checked exceptions, the (rollbackFor = Exception.class) attribute can be removed.
Also, for the @Transactional annotation to work, transaction management must be enabled; please go through @EnableTransactionManagement.
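As a concrete illustration, here is a minimal sketch of such a method using SimpleJdbcCall; the procedure names (proc_one, proc_two) and their parameters are assumptions, not part of the original question:

import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.simple.SimpleJdbcCall;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ProcedureService {

    private final JdbcTemplate jdbcTemplate;

    public ProcedureService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Both procedures execute on the same transactional connection;
    // an exception from either call rolls back the work of both.
    @Transactional(rollbackFor = Exception.class)
    public void callStoredProcedures() {
        new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("proc_one")          // hypothetical procedure
                .execute(Map.of("param1", 42));
        new SimpleJdbcCall(jdbcTemplate)
                .withProcedureName("proc_two")          // hypothetical procedure
                .execute(Map.of("param1", "abc"));
    }
}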

Spring Cloud Stream Kafka transactions on the producer side

We have a Spring Cloud Stream app using Kafka. The requirement is that, on the producer side, a list of messages needs to be put in a topic in a transaction. There is no consumer for the messages in the same app. When I initiated the transaction using the spring.cloud.stream.kafka.binder.transaction.transaction-id prefix, I got an error that there is no subscriber for the dispatcher and that the total number of partitions obtained from the topic is less than the transaction configured. The app is not able to obtain the partitions for the topic in transaction mode. Could you please tell me if I am missing anything? I will post detailed logs tomorrow.
Thanks
You need to show your code and configuration as well as the versions you are using.
Producer-only transactions are discussed in the documentation.
Enable transactions by setting spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix to a non-empty value, e.g. tx-. When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction. When the listener exits normally, the listener container will send the offset to the transaction and commit it. A common producer factory is used for all producer bindings configured using spring.cloud.stream.kafka.binder.transaction.producer.* properties; individual binding Kafka producer properties are ignored.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a @Scheduled method), you must get a reference to the transactional producer factory and define a KafkaTransactionManager bean using it.
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders) {
    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    return new KafkaTransactionManager<>(pf);
}
Notice that we get a reference to the binder using the BinderFactory; use null in the first argument when there is only one binder configured. If more than one binder is configured, use the binder name to get the reference. Once we have a reference to the binder, we can obtain a reference to the ProducerFactory and create a transaction manager.
Then you can just use normal Spring transaction support, e.g. TransactionTemplate or @Transactional, for example:
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }
}
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a ChainedTransactionManager.
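A minimal sketch of wiring such a chained manager, assuming the transactionManager bean from the snippet above plus a JDBC DataSourceTransactionManager elsewhere in the context; the bean names and the JDBC manager are assumptions:

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class ChainedTxConfig {

    // Chains the Kafka transaction manager with a JDBC one. Commits run in
    // reverse registration order; this is a best-effort chain, not a
    // two-phase commit.
    @Bean
    public ChainedTransactionManager chainedTransactionManager(
            @Qualifier("transactionManager") PlatformTransactionManager kafkaTransactionManager,
            DataSourceTransactionManager jdbcTransactionManager) {
        return new ChainedTransactionManager(kafkaTransactionManager, jdbcTransactionManager);
    }
}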

Passing a stream or a String to the Flyway API instead of locations

I was wondering if there is a way for Flyway to accept an actual SQL migration as a string or a stream instead of searching for it on the classpath.
I'm constructing the SQL migration in Java on the fly and would like to call the Flyway API and pass the migration as a parameter.
Please let me know if this is possible.
Thank you
Not entirely what you are asking for, but it looks like Java-based migrations might be a solution.
Basically, instead of V1_0__script.sql you write a V1_0__script.java class implementing JdbcMigration. Inside that class you have access to a JDBC Connection:
class V1_0__script implements JdbcMigration {
    public void migrate(Connection connection) throws Exception {
        // ...
    }
}
In migrate() you are free to run your custom SQL queries.
There is no API available for this. However, if you construct your SQL on the fly, it surely must be possible to construct it one statement at a time. Each statement can then be executed using the Connection parameter you get in a JdbcMigration.
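To illustrate, a minimal sketch of such a migration; the class name and table are hypothetical, and JdbcMigration is the older Flyway interface (newer Flyway versions use BaseJavaMigration instead):

import java.sql.Connection;
import java.sql.Statement;
import org.flywaydb.core.api.migration.jdbc.JdbcMigration;

public class V1_0__Create_audit_table implements JdbcMigration {

    @Override
    public void migrate(Connection connection) throws Exception {
        // Build the statement on the fly instead of loading it from the classpath.
        String sql = "CREATE TABLE audit_log (id BIGINT PRIMARY KEY, message VARCHAR(255))";
        try (Statement statement = connection.createStatement()) {
            statement.execute(sql);
        }
    }
}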

UnitOfWork + Repository patterns and Entity Framework impersonation

I have used the UnitOfWork and Repository patterns in my application with EF.
My design provides that the UnitOfWork creates the ObjectContext class and injects it into the concrete Repository class. For example:
UnitOfWork.cs (initialization)
public DefaultUnitOfWork() {
    if (_context == null) {
        _context = new MyDataContext(ConfigSingleton.GetInstance().ConnectionString);
    }
}
UnitOfWork.cs (getting a repository instance)
public CustomerRepository Customers {
    get {
        if (_customers == null) {
            _customers = new CustomerRepository(_context);
        }
        return _customers;
    }
}
This way the Repository classes have an already defined ObjectContext and can use its methods to retrieve and update data.
This works nicely.
Now I need to execute my queries impersonating the application pool identity, so I have decided to wrap the code in the constructor of the UnitOfWork within the impersonation.
Unfortunately this does not work, because the ObjectContext is then passed to the Repository constructor and used later, when a client of the repository calls, for example, FindAll().
I have found that Entity Framework opens the real connection to the database right before executing the query, not when the ObjectContext itself is created.
How can I solve this problem?
You could use one or more ObjectContext factories (to create ObjectContexts) using different creation criteria, such as the connection string. Your UnitOfWork could leverage a factory to get its context, and so could the Repository, but I think you've missed the point of UnitOfWork if it is leveraging a different ObjectContext than your Repository.
A UnitOfWork should consist of one or more operations that should be completed together, which could easily span multiple repositories. If the repositories have their own ObjectContexts separate from the UnitOfWork's, I don't see how committing the UnitOfWork will achieve its purpose.
I think either I've misinterpreted your question completely or you've left out some pertinent details. Good luck!
