How to properly use EntityRepository instances? - mikro-orm

The documentation stresses that I should use a new EntityManager for each request, and there's even a middleware for creating it automatically; alternatively, I can use em.fork(). So far so good.
The EntityRepository is a great way to make the code readable. I could not find anything in the documentation about how repositories relate to EntityManager instances. The express-ts-example-app example uses single instances of repositories together with the RequestContext middleware. This suggests that there is some under-the-hood magic that finds the correct EntityManager instance, at least with the RequestContext. Is that really so?
Also, if I fork the EM manually, can it still find the right one? Consider the following example:
(async () => {
  DI.orm = await MikroORM.init();
  DI.em = DI.orm.em;
  DI.companyRepository = DI.orm.em.getRepository(Company);
  DI.invoiceRepository = DI.orm.em.getRepository(Invoice);
  ...
  await fetchInvoices(DI.em.fork());
})();
async function fetchInvoices(em) {
  for (const company of await DI.companyRepository.findAll()) {
    await fetchInvoicesOfACompany(company, em.fork());
  }
}
async function fetchInvoicesOfACompany(company, em) {
  let done = false;
  while (!done) {
    const invoice = await getNextInvoice(company.taxnumber, company.lastInvoice);
    if (invoice) {
      DI.invoiceRepository.persist(invoice);
      company.lastInvoice = invoice.id;
      await em.flush();
    } else {
      done = true;
    }
  }
}
Does the DI.invoiceRepository.persist() in fetchInvoicesOfACompany() use the right EM instance? If not, what should I do?
Also, if I'm not mistaken, the em.flush() in fetchInvoicesOfACompany() does not update company, since that belongs to another EM - how should I handle situations like this?

First of all, a repository is just a thin layer on top of the EM (an extension point if you want) that carries the entity name, so you don't have to pass it as the first parameter of the EM methods (e.g. em.find(Ent, ...) vs repo.find(...)).
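To make that concrete, here is a minimal sketch reusing the DI object and the Company entity from the question (the name filter is just a placeholder, not a field defined in the question):

// The two calls below are equivalent; the repository only supplies the entity name.
const viaEm = await DI.em.find(Company, { name: 'Acme' });
const viaRepo = await DI.companyRepository.find({ name: 'Acme' });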
Then the contexts - you need a dedicated context for each request, so it has its own identity map. If you use the RequestContext helper, the context is created and saved via the domain API. Thanks to this, all the methods that are executed inside the domain handler will use the right instance automatically - this happens in the em.getContext() method, which first checks the RequestContext helper.
https://mikro-orm.io/docs/identity-map/#requestcontext-helper-for-di-containers
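For reference, the middleware-based setup for Express looks roughly like the following sketch based on the docs above (the package name and the bootstrap shape are assumptions; adjust to your setup):

import express from 'express';
import { MikroORM, RequestContext } from '@mikro-orm/core';

(async () => {
  const app = express();
  const orm = await MikroORM.init();

  // Register before the routes: every request gets its own forked EntityManager
  // (with its own identity map), and em.getContext() will pick it up automatically.
  app.use((req, res, next) => {
    RequestContext.create(orm.em, next);
  });

  // ... register routes and start listening here
})();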
Check the tests for better understanding of how it works:
https://github.com/mikro-orm/mikro-orm/blob/master/tests/RequestContext.test.ts
So if you use repositories with the RequestContext helper, it will work just fine: the singleton repository instance will use the singleton EM instance, which will then use the right request-based instance via em.getContext() where appropriate.
But if you use manual forking instead, you are responsible for using the right repository instance - each EM fork will have its own. So in this case you can't use a singleton; you need to do forkedEm.getRepository(Ent).
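A minimal sketch of what that looks like with the DI object and entities from the question:

const forkedEm = DI.orm.em.fork();

// Each fork exposes its own repository instances, bound to the fork's identity map,
// so the singleton DI.invoiceRepository must not be used here.
const invoiceRepository = forkedEm.getRepository(Invoice);
const companies = await forkedEm.getRepository(Company).findAll();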
Btw, alternatively you can also use AsyncLocalStorage, which is faster (and not deprecated), if you are on node 12+. The RequestContext helper implementation will use ALS in v5, as node 12+ will be required.
https://mikro-orm.io/docs/async-local-storage
Another thing you could do is to use the RequestContext helper manually instead of via middlewares - something like the following:
(async () => {
  DI.orm = await MikroORM.init();
  DI.em = DI.orm.em;
  DI.companyRepository = DI.orm.em.getRepository(Company);
  DI.invoiceRepository = DI.orm.em.getRepository(Invoice);
  ...
  await RequestContext.createAsync(DI.em, async () => {
    await fetchInvoices();
  });
})();
async function fetchInvoices() {
  for (const company of await DI.companyRepository.findAll()) {
    await fetchInvoicesOfACompany(company);
  }
}
async function fetchInvoicesOfACompany(company) {
  let done = false;
  while (!done) {
    const invoice = await getNextInvoice(company.taxnumber, company.lastInvoice);
    if (invoice) {
      company.lastInvoice = invoice; // passing the entity instance; no need to persist, as `company` is a managed entity and this change will be cascaded
      await DI.em.flush();
    } else {
      done = true;
    }
  }
}

Related

Unity to DryIoC conversion ParameterOverride

We are transitioning from Xamarin.Forms to .NET MAUI, but our project uses Prism.Unity.Forms. We have a lot of code that basically uses IContainer.Resolve<T>(), passing in a collection of ParameterOverrides - some are primitives, but some are interfaces/objects. The T we are resolving is usually a registered View, which may or may not be the correct way of doing this, but it's what I'm working with, and we are doing it in backend code (sometimes a service). What is the correct way of doing this Unity thing in DryIoc? Note these parameters are being set at runtime and may only be part of the parameters a constructor takes (some may come from already registered dependencies).
Example of the scenario:
//Called from service into custom resolver method
var parameterOverrides = new[]
{
    new ParameterOverride("productID", 8675309),
    new ParameterOverride("objectWithData", IObjectWithData)
};

//Custom resolver method example
var resolverOverrides = new List<ResolverOverride>();
foreach (var parameterOverride in parameterOverrides)
{
    resolverOverrides.Add(parameterOverride);
}
return _container.Resolve<T>(resolverOverrides.ToArray());
You've found out why you don't use the container outside of the resolution root. I recommend not trying to replicate this error with another container but rather fixing it - use handcoded factories:
internal class SomeFactory : IProductViewFactory
{
    public SomeFactory( IService dependency )
    {
        _dependency = dependency ?? throw new ArgumentNullException( nameof(dependency) );
    }

    #region IProductViewFactory
    public IProductView Create( int productID, IObjectWithData objectWithData ) => new SomeProduct( productID, objectWithData, _dependency );
    #endregion

    #region private
    private readonly IService _dependency;
    #endregion
}
See this, too:
For dependencies that are independent of the instance you're creating, inject them into the factory and store them until needed.
For dependencies that are independent of the context of creation but need to be recreated for each created instance, inject factories into the factory and store them.
For dependencies that are dependent on the context of creation, pass them into the Create method of the factory.
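As a sketch of the second point, the factory above could take a delegate for anything that must be created fresh per product (the IUnitOfWork type, the Func<> injection, and the extended SomeProduct constructor are hypothetical, not from the original question):

internal class SomeFactory : IProductViewFactory
{
    private readonly IService _dependency;           // independent of the created instance: injected once and stored
    private readonly Func<IUnitOfWork> _uowFactory;  // must be recreated per created instance: inject a factory instead

    public SomeFactory( IService dependency, Func<IUnitOfWork> uowFactory )
    {
        _dependency = dependency ?? throw new ArgumentNullException( nameof(dependency) );
        _uowFactory = uowFactory ?? throw new ArgumentNullException( nameof(uowFactory) );
    }

    // Context-dependent values (productID, objectWithData) are still passed into Create.
    public IProductView Create( int productID, IObjectWithData objectWithData )
        => new SomeProduct( productID, objectWithData, _dependency, _uowFactory() );
}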
Also, be aware of potential subtle differences in container behaviours: Unity's ResolverOverride works for the whole call to Resolve, i.e. it overrides the parameters of dependencies, too, whatever happens to match by name. This could very well be handled differently by DryIoc.
First, I would agree with the answer from haukinger to rethink how you pass the runtime information into the services. The most transparent and simple way, in my opinion, is passing it via parameters into the consuming methods.
Second, here is a complete example in DryIoc to solve it head-on + the live code to play with.
using System;
using DryIoc;

public class Program
{
    record ParameterOverride(string Name, object Value);
    record Product(int productID);

    public static void Main()
    {
        // Get the container somehow;
        // if you don't have access to it directly, you may resolve it from your service provider.
        IContainer c = new Container();
        c.Register<Product>();

        var parameterOverrides = new[]
        {
            new ParameterOverride("productID", 8675309),
            new ParameterOverride("objectWithData", "blah"),
        };

        var parameterRules = Parameters.Of;
        foreach (var po in parameterOverrides)
        {
            parameterRules = parameterRules.Details((_, x) => x.Name.Equals(po.Name) ? ServiceDetails.Of(po.Value) : null);
        }

        c = c.With(rules => rules.With(parameters: parameterRules));

        var s = c.Resolve<Product>();
        Console.WriteLine(s.productID);
    }
}

Creating proxies for real objects

I want to write an integration test using a real repository, but also verify the behavior of the repository:
SomeService(IRepository r) calls r.QuerySomething()
And I've been trying to achieve this using Moq:
var mock = new Mock<IRepository>(() => new Repository());
mock.CallBase = true;
The trouble is that it never calls methods from Repository, nor does it call its constructor. The lambda over there is meant for getting ctor parameters (if the type is a class), not for object initialization.
Q: How do I wrap new Repository() into a Mock<IRepository> so I can verify calls?
NB: it works if the type given is a class, but I cannot then use it for verifying, since the implementation is not virtual.
Alternatively, is there some other NuGet package that can help me achieve this?
There is a technique that I use for testing brownfield legacy code; it can probably help with what you're trying to achieve. You can introduce a decorator into your test project that wraps your original implementation but also implements the IRepository interface.
class TestRepository : IRepository
{
    private readonly Repository next;

    public TestRepository(Repository next)
    {
        this.next = next;
    }
}
Inside this class you can declare all the interface members as virtual.
class TestRepository : IRepository
{
    public virtual IReadOnlyList<Client> GetByName(string name)
    {
        return this.next.GetByName(name);
    }
}
Now you can use the TestRepository in place of your original implementation and also create a mock that verifies the calls to this class.
var repository = new Repository();
var sutMock = new Mock<TestRepository>(repository) { CallBase = true };
var sut = sutMock.Object;
sut.GetByName("John Doe");
sutMock.Verify(x => x.GetByName("John Doe"), Times.Once);
NB: The fact that you'd need a legacy-code testing technique probably indicates a code smell. I would recommend, as a first step, splitting the tests that assert against the mock from those that assert the real implementation's results (changes in the persistence layer).

GraphQL .NET: middleware problems with singleton schema

I'm using GraphQL on a .NET core website/controller. The schema is quite large, such that the constructor takes about a second to run. I couldn't have that kind of overhead on every request, so I created the schema as a singleton, shared between all requests.
public async Task<IActionResult> Post([FromBody] GraphQLQuery query)
{
    var executionOptions = new ExecutionOptions {
        Schema = this.Schema, // dependency injected singleton
        /* ... */
    };
    // ...
    executionOptions.FieldMiddleware.Use(next => context =>
    {
        return next(context).ContinueWith(x => {
            var result = x.Result;
            return doStuff(result);
        });
    });
    var result = await new DocumentExecuter().ExecuteAsync(executionOptions).ConfigureAwait(false);
    // ...
}
This worked most of the time, but it caused random problems with the middleware. Sometimes the middleware would start running twice for each element, which would usually cause an error the second time the middleware ran.
Looking at the source, it appears the middleware is being applied to the schema during the life cycle of a request, and then somehow rolled back at the end I guess? At least I'm assuming that's how the public void ApplyTo(ISchema schema) member is being used, although I'm not sure how the "rollback" part was happening.
This gave me an idea of how to solve the problem: pull the middleware out of the controller and put it in the schema constructor, like this:
public class MySchema : Schema
{
    public MySchema()
    {
        this.Query = new MyQuery();
        this.Mutation = new MyMutation();

        var builder = new FieldMiddlewareBuilder();
        builder.Use(next => context =>
        {
            return next(context).ContinueWith(x => {
                var result = x.Result;
                return doStuff(result);
            });
        });
        builder.ApplyTo(this);
    }
}
So now the middleware is baked directly into the schema when the singleton is constructed, and the controller doesn't have to do anything.
This appears to have completely solved the problem. I'm not sure if there are other things in graphql-dotnet that mutate the schema during the request life cycle. If anyone knows of any other problems that might occur with a singleton schema I'd love to hear it!
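For reference, registering such a singleton schema in ASP.NET Core DI might look like the following sketch (AddSingleton is the standard ASP.NET Core registration call; whether you register the ISchema interface or the concrete MySchema type depends on how your controller resolves it):

// In Startup.ConfigureServices: the schema (and its expensive constructor) is built once
// and shared by all requests.
services.AddSingleton<ISchema, MySchema>();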

Facade pattern for Symfony2 services

New to Symfony2, I'm building an app that uses an external API to get data. I created a lot of client classes to retrieve and transform each entity from the API, and I defined those classes as services - e.g., I have a FooClient with methods like getAll() or getThoseThatInterestMe($me), which return data from the API.
Now I wanted to create an ApiClientFacade class, which acts as an interface in front of all the XxxClient classes, following the Facade pattern - e.g., this facade class would have a method getAllFoo(), which in turn would call FooClient::getAll(), and so on...
I could define my facade class as a service as well, but it'd have too many dependencies - I have around 30 client classes. Also, afaik, with this approach I'd be loading all 30 dependencies every time, while most of the time I'd only need one...
So, is there a better way to do this?
Use an additional ApiClientFactory to move the responsibility for instantiating the ApiClient objects out of your ApiFacade class (which is your initial idea, as I understood it).
In some pseudo-php code my idea is:
$api = new ApiFacade(new ApiClientFactory);
$api->sendNotificationAboutUserLogin('username', time());
An example of a method:
class ApiFacade {
    private $apiFactory;

    public function __construct(ApiClientFactory $factory)
    {
        $this->apiFactory = $factory;
    }

    public function sendNotificationAboutUserLogin($username, $timeOfOperation)
    {
        return $this->apiFactory
            ->createApi('User')
            ->post(
                'notifications',
                array('operation' => 'login', 'username' => $username, 'timeOfOperation' => $timeOfOperation)
            );
    }
}
In this case your facade class stays injectable (and testable), but it also becomes simpler to instantiate (you don't need to pass all the dependencies into it anymore).
The ApiClientFactory should look like this:
class ApiClientFactory {
    private $apiBaseUrl;

    public function __construct($apiBaseUrl)
    {
        $this->apiBaseUrl = $apiBaseUrl;
    }

    public function createApi($apiName)
    {
        switch ($apiName) {
            case 'User': return new \My\UserApi($this->apiBaseUrl);
            default: // throw an exception?
        }
    }
}
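For completeness, the wiring of those two services could look like this sketch using Symfony's ContainerBuilder directly (the service ids and the %api_base_url% parameter are hypothetical; most Symfony2 projects would declare the same thing in services.yml or XML instead):

use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Reference;

$container = new ContainerBuilder();

// The factory only needs the API base URL.
$container->register('api_client_factory', 'My\ApiClientFactory')
    ->addArgument('%api_base_url%');

// The facade only depends on the factory, no matter how many client classes exist.
$container->register('api_facade', 'My\ApiFacade')
    ->addArgument(new Reference('api_client_factory'));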

Dart, how to create a future to return in your own functions?

Is it possible to create your own futures in Dart to return from your methods, or must you always return a built-in future returned by one of the dart:async library's methods?
I want to define a function which always returns a Future<List<Base>>, whether it's actually doing an async call (file read/ajax/etc.) or just getting a local variable, as below:
List<Base> aListOfItems = ...;

Future<List<Base>> GetItemList(){
  return new Future(aListOfItems);
}
If you need to create a future, you can use a Completer. See Completer class in the docs. Here is an example:
Future<List<Base>> GetItemList(){
  var completer = new Completer<List<Base>>();
  // At some time you need to complete the future:
  completer.complete(new List<Base>());
  return completer.future;
}
But most of the time you don't need to create a future with a completer. Like in this case:
Future<List<Base>> GetItemList(){
  var completer = new Completer();
  aFuture.then((a) {
    // At some time you need to complete the future:
    completer.complete(a);
  });
  return completer.future;
}
The code can become very complicated using completers. You can simply use the following instead, because then() returns a Future, too:
Future<List<Base>> GetItemList(){
  return aFuture.then((a) {
    // Do something..
  });
}
Or an example for file io:
Future<List<String>> readCommaSeparatedList(file){
  return file.readAsString().then((text) => text.split(','));
}
See this blog post for more tips.
You can simply use the Future<T>.value factory constructor:
return Future<String>.value('Back to the future!');
Returning a future from your own function
This answer is a summary of the many ways you can do it.
Starting point
Your method could be anything but for the sake of these examples, let's say your method is the following:
int cubed(int a) {
  return a * a * a;
}
Currently you can use your method like so:
int myCubedInt = cubed(3); // 27
However, you want your method to return a Future like this:
Future<int> myFutureCubedInt = cubed(3);
Or to be able to use it more practically like this:
int myCubedInt = await cubed(3);
The following solutions all show ways to do that.
Solution 1: Future() constructor
The most basic solution is to use the generative constructor of Future.
Future<int> cubed(int a) {
  return Future(() => a * a * a);
}
I changed the return type of the method to Future<int> and then passed in the work of the old function as an anonymous function to the Future constructor.
Solution 2: Future named constructor
Futures can complete with either a value or an error. Thus if you want to specify either of these options explicitly you can use the Future.value or Future.error named constructors.
Future<int> cubed(int a) {
  if (a < 0) {
    return Future.error(ArgumentError("'a' must be positive."));
  }
  return Future.value(a * a * a);
}
Not allowing a negative value for a is a contrived example to show the use of the Future.error constructor. If there is nothing that would produce an error then you can simply use the Future.value constructor like so:
Future<int> cubed(int a) {
  return Future.value(a * a * a);
}
Solution 3: async method
An async method automatically returns a Future so you can just mark the method async and change the return type like so:
Future<int> cubed(int a) async {
  return a * a * a;
}
Normally you use async in combination with await, but there is nothing that says you must do that. Dart automatically converts the return value to a Future.
In the case that you are using another API that returns a Future within the body of your function, you can use await like so:
Future<int> cubed(int a) async {
  return await cubedOnRemoteServer(a);
}
Or this is the same thing using the Future.then syntax:
Future<int> cubed(int a) async {
  return cubedOnRemoteServer(a).then((result) => result);
}
Solution 4: Completer
Using a Completer is the most low level solution. You only need to do this if you have some complex logic that the solutions above won't cover.
import 'dart:async';

Future<int> cubed(int a) async {
  final completer = Completer<int>();
  if (a < 0) {
    completer.completeError(ArgumentError("'a' must be positive."));
  } else {
    completer.complete(a * a * a);
  }
  return completer.future;
}
This example is similar to the named constructor solution above. It handles errors in addition to completing the future in the normal way.
A note about blocking the UI
There is nothing about using a future that guarantees you won't block the UI (that is, the main isolate). Returning a future from your function simply tells Dart to schedule the task at the end of the event queue. If that task is intensive, it will still block the UI when the event loop schedules it to run.
If you have an intensive task that you want to run on another isolate, then you must spawn a new isolate to run it on. When the task completes on the other isolate, it will return a message as a future, which you can pass on as the result of your function.
Many of the standard Dart IO classes (like File or HttpClient) have methods that delegate the work to the system and thus don't do their intensive work on your UI thread. So the futures that these methods return are safe from blocking your UI.
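For example, if cubing were actually an expensive computation, you could offload it like this sketch (Isolate.run requires Dart 2.19 or later; older code would use Isolate.spawn or Flutter's compute instead):

import 'dart:isolate';

Future<int> cubed(int a) {
  // The closure runs on a separate isolate; its result comes back as a Future,
  // so the main isolate (and the UI) is never blocked by the computation.
  return Isolate.run(() => a * a * a);
}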
See also
Asynchrony support documentation
Flutter Future vs Completer
@Fox32 has the correct answer; in addition to that, we need to mention the type of the Completer, otherwise we get an exception.
The exception received is: type 'Future<dynamic>' is not a subtype of type 'FutureOr<List<Base>>'
So the initialisation of the completer becomes:
var completer = new Completer<List<Base>>();
Not exactly the answer for the given question, but sometimes we might want to await a closure:
flagImage ??= await () async {
  ...
  final image = (await codec.getNextFrame()).image;
  return image;
}();
I think it does create a future implicitly, even though we don't pass it anywhere.
Here is a simple conditional Future example.
String? _data;

Future<String> load() async {
  // use preloaded data
  if (_data != null) return Future<String>.value(_data!);
  // load data only once
  String data = await rootBundle.loadString('path_to_file');
  _data = data;
  return data;
}
