Does Speedment support transactions?

I have implemented the persistence layer using Speedment and I would like to
test the code using spring boot unit tests. I have annotated my unit tests with the following annotations:
@RunWith(SpringRunner.class)
@SpringBootTest
@Transactional
public class MovieServiceTest {
...
}
By default, Spring starts a new transaction surrounding each test method and its @Before/@After callbacks, rolling the transaction back at the end. With Speedment, however, this does not seem to work.
Does Speedment support transactions spanning several invocations, and if so, how do I configure Spring to use Speedment's transactions, or how do I configure Speedment to use the data source provided by Spring?

Transaction support was added in Speedment 3.0.17. However, it does not integrate with Spring's @Transactional annotation yet, so you will have to wrap the code you want to execute as a single transaction, as shown here:
txHandler.createAndAccept(tx -> {
    Account sender = accounts.stream()
        .filter(Account.ID.equal(1))
        .findAny()
        .get();
    Account receiver = accounts.stream()
        .filter(Account.ID.equal(2))
        .findAny()
        .get();
    accounts.update(sender.setBalance(sender.getBalance() - 100));
    accounts.update(receiver.setBalance(receiver.getBalance() + 100));
    tx.commit();
});

It is likely that you are streaming over a table and then performing an update/remove operation while the stream is still open. Most databases cannot handle having an open ResultSet on a Connection while update operations are performed on that same connection.
Luckily, there is an easy workaround: collect the entities you would like to modify into an intermediate Collection (such as a List or Set) and then use that Collection to perform the desired operations.
This case is described in the Speedment User's Guide here:
txHandler.createAndAccept(tx -> {
    // Collect to a list before performing actions
    List<Language> toDelete = languages.stream()
        .filter(Language.LANGUAGE_ID.notEqual((short) 1))
        .collect(toList());

    // Do the actual actions
    toDelete.forEach(languages.remover());

    tx.commit();
});

AFAIK it does not (yet). Correction: it seems to set up one transaction per stream/statement.
See this article: https://dzone.com/articles/best-java-orm-frameworks-for-postgresql
But it should be possible to implement by writing a custom extension: https://github.com/speedment/speedment/wiki/Tutorial:-Writing-your-own-extensions
Edit:
According to a Speedment developer, one stream maps to one transaction: https://www.slideshare.net/Hazelcast/webinar-20150305-speedment-2

Related

Making a blocking HTTP call in Akka stream processing

I am new to Akka and still trying to understand the different Akka and streaming concepts. For a new feature I need to add an HTTP call to an already existing stream which is working on an internal object. Something like this:
val step1Flow = Flow[SampleObject].filter(...--Filtering condition--...)
val step2Flow = Flow[SampleObject].map(obj => {
...
-- Business logic to update values in the obj --
...
})
...
override val flowGraph: Flow[SampleObject, SampleObject, NotUsed] =
bufferIn.via(Flow.fromGraph(GraphDSL.create() {
implicit builder =>
import GraphDSL.Implicits._
...
val step1 = builder.add(step1Flow)
val step2 = builder.add(step2Flow)
val step3 = builder.add(step3Flow)
...
source ~> step1 ~> step2 ~> step3 ~> merge
...
}
I need to add the new HTTP request flow (let's call it newFlow) after step1. All these flows have SampleObject as their Inlet and Outlet. My understanding is that newFlow would need to be blocking, because the outlet needs to be SampleObject only. For that I have used Await on the HTTP call future. The code looks like this:
val responseFuture: Future[(Try[HttpResponse], SomeContext)] =
Source
.single(httpRequest -> context)
.via(Retry(retrySettings).join(clientFlow))
.runWith(Sink.head)
...
val (httpTry, passedAlongContext) = Await.result(responseFuture, 30.seconds)
-- logic to process response and return SampleObject --
Now this works fine, but I think there should be a better way to do this without using Await. I also think this blocks the main thread until the request completes, which is going to affect the app throughput.
Could you please advise whether the approach I used is correct, and how I can use a separate thread pool to handle these blocking calls so my main thread pool is not affected?
This question seems very similar to mine, but I do not understand it completely: connect Akka HTTP to Akka stream. Also, I can't change step2 or the later flows.
EDIT: Added some code details for the stream.
I ended up using the approach mentioned in the question because I couldn't find anything better after looking around. Adding this step decreased the throughput of my application as expected, but there are techniques that can be used to increase it. Check these awesome blog posts by Colin Breck:
https://blog.colinbreck.com/maximizing-throughput-for-akka-streams/
https://blog.colinbreck.com/partitioning-akka-streams-to-maximize-throughput/
To summarize:
Use asynchronous boundaries for flows which are blocking.
Use Futures if possible and add callbacks to them; there are several ways to do that (see the sketch after this list).
Use buffers. There are several types of buffers available; choose what suits your needs.
Other than these, you can use built-in flows such as:
Use "Broadcast" to broadcast your events to multiple consumers.
Use "Partition" to split your stream into multiple streams based on some condition.
Use "Balance" to split your stream when there is no logical way to partition your events, or when they could all have different workloads.
You can use any one, or a combination, of the options above.
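If it helps, here is a rough sketch of the first two points combined, written against Akka Streams' and Akka HTTP's Java DSL (the question uses the Scala DSL, but the same operators exist there); the SampleObject type, the toRequest/updateFromResponse helpers, the system reference and the dispatcher name are placeholders I made up, and a reasonably recent Akka HTTP is assumed:
// Hypothetical sketch: non-blocking HTTP call inside a stream stage.
Flow<SampleObject, SampleObject, NotUsed> newFlow =
    Flow.of(SampleObject.class)
        // mapAsync preserves element order and avoids Await: the CompletionStage
        // returned by singleRequest is completed on Akka HTTP's own threads.
        .mapAsync(4, obj ->
            Http.get(system)
                .singleRequest(toRequest(obj))
                .thenApply(response -> updateFromResponse(obj, response)))
        // Async boundary so this stage does not run fused with step2/step3;
        // a dedicated dispatcher keeps any remaining blocking work off the default pool.
        .withAttributes(ActorAttributes.dispatcher("custom-blocking-dispatcher"))
        .async();
Because the stage still emits SampleObject, step2 and the later flows can stay unchanged.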

How to get transaction history by a specific transaction ID (txhash) in Corda

To get states I can use the vault, but what about transactions? How can I get them, for example, by txHash? Is it possible to do this with vaultService.queryBy(criteria)?
Since the internalVerifiedTransactionsSnapshot method is now deprecated, is there any way to retrieve a specific transaction by its txhash as of Corda 4?
Inside the node you can call:
serviceHub.validatedTransactions.getTransaction(hash)
Via RPC I think you can do this:
proxy.stateMachineRecordedTransactionMappingSnapshot().map { it.transactionId }.first { it == hash }
But a better solution would be to create a flow that takes in a hash, calls the first snippet and returns the transaction.
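For example, a minimal sketch of such a flow in Java against the Corda 4 API (the FetchTransactionFlow name is made up; the lookup itself is the same snippet as above) might look like this:
import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.crypto.SecureHash;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.StartableByRPC;
import net.corda.core.transactions.SignedTransaction;

@StartableByRPC
public class FetchTransactionFlow extends FlowLogic<SignedTransaction> {

    private final SecureHash txHash;

    public FetchTransactionFlow(SecureHash txHash) {
        this.txHash = txHash;
    }

    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        // Look the transaction up in the node's transaction storage by its hash.
        SignedTransaction stx = getServiceHub().getValidatedTransactions().getTransaction(txHash);
        if (stx == null) {
            throw new FlowException("No transaction recorded for hash " + txHash);
        }
        return stx;
    }
}
From RPC you could then start it with something like proxy.startFlowDynamic(FetchTransactionFlow.class, hash).getReturnValue().get().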

Datastore: Saving an entity with successors in the same transaction with auto-generated key IDs

I'd like to run the following algorithm (it's more like JavaScript pseudocode):
const transaction = datastore.transaction();
await transaction.run();
const parentKey = createKey(namespace, kind); // note that I leave the ID to be generated
await transaction.save(parentKey, parentEntity);
const childKey = createKey(namespace, kind, parentId, parentKind); // ???
await transaction.save(childKey, childEntity);
await transaction.commit();
How can I know the parentId, since the initial save of parentEntity is not yet committed?
I'd like to run this in a single transaction; is this achievable?
No, this is not possible due to the datastore's transaction isolation and consistency (emphasis mine):
This consistent snapshot view also extends to reads after writes inside transactions. Unlike with most databases, queries and gets inside a Cloud Datastore transaction do not see the results of previous writes inside that transaction. Specifically, if an entity is modified or deleted within a transaction, a query or lookup returns the original version of the entity as of the beginning of the transaction, or nothing if the entity did not exist then.
Depending on why you actually need such a sequence to be done transactionally, you might be able to achieve something roughly equivalent this way (sketched below):
create the parent transactionally
in the same transaction also create and transactionally enqueue a push queue task, passing it the parent's key as a parameter - the task will only be enqueued if/when the transaction succeeds
in the task handler (also made transactional) create the child entity - guaranteed to only happen once
Note that not all GAE environments support such a scheme due to limited push task queue support.
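For illustration, a rough sketch of these steps against the legacy App Engine Standard Java APIs (com.google.appengine.api.datastore.* and com.google.appengine.api.taskqueue.*) might look like this; the "Parent" kind, the /tasks/create-child URL and its handler are made-up placeholders, and the Node.js client would follow the same idea:
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
Transaction txn = datastore.beginTransaction();
try {
    // 1. Save the parent; put() returns the key with the auto-allocated ID.
    Entity parent = new Entity("Parent");
    Key parentKey = datastore.put(txn, parent);

    // 2. Transactionally enqueue a push task carrying the parent's key.
    //    The task is only dispatched if the transaction commits.
    Queue queue = QueueFactory.getDefaultQueue();
    queue.add(txn, TaskOptions.Builder
            .withUrl("/tasks/create-child")
            .param("parentKey", KeyFactory.keyToString(parentKey)));

    txn.commit();
} finally {
    if (txn.isActive()) {
        txn.rollback();
    }
}

// 3. The /tasks/create-child handler then creates the child entity in its own
//    transaction, using the parent key passed in as a parameter.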

How to get all aggregates with Axon framework?

I am starting out with the Axon framework and hit a bit of a roadblock.
While I can load individual aggregates using their ID, I can’t figure out how to get a list of all aggregates, or a list of all aggregate IDs.
The EventSourcingRepository class only has load() methods that return one aggregate.
Is there a way to get all aggregates (or all aggregate IDs), or am I supposed to keep a list of all aggregate IDs outside of Axon?
To keep things simple I am only using an InMemoryEventStorageEngine for now.
I am using Axon 3.0.7.
First off, I was wondering why you would want to retrieve a complete list of all the aggregates from the Repository.
The Repository interface is set up so that you can load an Aggregate to handle commands, or create a new Aggregate.
Asking the question you have, I'd almost guess you're using it for querying purposes rather than command handling.
That, however, isn't the intended use of the EventSourcingRepository.
One reason I can think of for wanting this is that you want to implement an API call that publishes a command to all Aggregates of a specific type in your application.
In that scenario then yes, you would need to store the aggregateId references yourself.
But to conclude with my earlier question: why do you want to retrieve a list of aggregates through the Repository interface?
Answer Update
Regarding your comment, I've added the following to my answer:
Axon helps you to set up your application with event sourcing in mind, but also with CQRS (Command Query Responsibility Segregation).
That thus means that the command and query side of your application are pulled apart.
The Aggregate Repository is the command side of your application, where you request to perform actions.
It thus does not provide a list of aggregates, as a command is an expression of intent on an aggregate. Hence it only requires the Repository user to retrieve one aggregate or create one.
The example you've got that you need of the list of Aggregates is the query side of your application.
The query side (your views/entities) is typically updated based on events (sourced through events).
For any query requirement you have in your application, you'd typically introduce a separate view tailored to your needs.
In your example, that means you'd introduce an Event Handling Component, listening to your aggregate's events, which updates a Repository with query models of your aggregate.
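As a bare-bones sketch of such a component (the event, view and repository types here are made up for illustration; only the @EventHandler annotation comes from Axon):
// Hypothetical query-side projection kept up to date from aggregate events.
public class AggregateSummaryProjection {

    private final AggregateSummaryRepository repository; // your query-side store (JPA, Mongo, ...)

    public AggregateSummaryProjection(AggregateSummaryRepository repository) {
        this.repository = repository;
    }

    @EventHandler
    public void on(MyAggregateCreatedEvent event) {
        // Store the aggregate identifier (and whatever else you need to query on),
        // so "give me all aggregates" becomes a plain query on this repository.
        repository.save(new AggregateSummary(event.getAggregateId(), event.getName()));
    }
}
Listing all aggregates (or their IDs) is then a read on that query model, without touching the EventSourcingRepository at all.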
The EventStore passed into EventSourcingRepository implements StreamableMessageSource<M extends Message<?>> which is a means of reaching in for the aggregates.
Whilst doing it the framework way with an event handling component will probably scale better (depending on how it's used / the context), I'm pretty sure the event handling components are driven by StreamableMessageSource<M extends Message<?>> anyway. So if we wanted to skip the framework and just reach in, we could do it like this:
List<String> aggregates(StreamableMessageSource<Message<?>> eventStore) {
    return immediatelyAvailableStream(eventStore.openStream(
            eventStore.createTailToken() /* All events in the event store */
        ))
        .filter(e -> e instanceof DomainEventMessage)
        .map(e -> (DomainEventMessage) e)
        .map(DomainEventMessage::getAggregateIdentifier)
        .distinct()
        .collect(Collectors.toList());
}

/*
 Note that the stream returned by BlockingStream.asStream() will block / won't terminate
 as it waits for future elements.
*/
static <M> Stream<M> immediatelyAvailableStream(final BlockingStream<M> messageStream) {
    Iterator<M> iterator = new Iterator<M>() {
        @Override
        public boolean hasNext() {
            return messageStream.hasNextAvailable();
        }

        @Override
        public M next() {
            try {
                return messageStream.nextAvailable();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Didn't expect to be interrupted");
            }
        }
    };
    Spliterator<M> spliterator = Spliterators.spliteratorUnknownSize(iterator, Spliterator.ORDERED);
    Stream<M> stream = StreamSupport.stream(spliterator, false);
    return stream.onClose(messageStream::close);
}

Grails 3.3 Multiple Asynchronous GORM calls during integration test without access to the database.

I was writing integration tests in Grails 3.3 with multiple asynchronous GORM calls when I realized I could not get access to values stored in the database. I wrote the following test to understand what is happening.
void "test something"() {
given:
def instance = new ExampleDomain(aStringField: "testval").save(flush:true)
when:
def promise = ExampleDomain.async.task {
ExampleDomain.get(instance.id).aStringField
}
then:
promise.get() == "testval"
}
My domain class
class ExampleDomain implements AsyncEntity<ExampleDomain> {
    String aStringField

    static constraints = {}
}
build.gradle configuration
compile "org.grails:grails-datastore-gorm-async:6.1.6.RELEASE"
Any idea what is going wrong? I'm expecting to have access to the datastore during the execution of the async call.
Most likely the given block is running in a transaction that hasn't been committed. Without seeing the full test class it is impossible to know for sure; however, it is likely you have the @Rollback annotation.
The fix is to remove the annotation and put the logic for saving the domain instance in a separate transactional method. You will then be responsible for cleaning up any inserted data.
