I have a nameko service that deals with lots of entities, and having all the entrypoints in a single service.py module would make the module highly unreadable and harder to maintain.
So I've decided to split the module up into multiple services which are then used to extend the main service. I am somewhat worried about dependency injection and thought a dependency like the db might end up with multiple instances due to this approach. This is what I have so far:
The customer service module, with all customer-related endpoints:
# app/customer/service.py
from nameko.web.handlers import http


class HTTPCustomerService:
    """HTTP endpoints for customer module"""

    name = "http_customer_service"

    db = None
    context = None
    dispatch = None

    @http("GET,POST", "/customers")
    def customer_listing(self, request):
        session = self.db.get_session()
        return CustomerListController.as_view(session, request)

    @http("GET,PUT,DELETE", "/customers/<uuid:pk>")
    def customer_detail(self, request, pk):
        session = self.db.get_session()
        return CustomerDetailController.as_view(session, request, pk)
And the main service module, which inherits from the customer service (and possibly other abstract services):
# app/service.py

class HTTPSalesService(HTTPCustomerService):
    """Nameko http service."""

    name = "http_sales_service"

    db = Database(Base)
    context = ContextData()
    dispatch = EventDispatcher()
And finally I run it with:
nameko run app.service
So this works well, but is the approach right? Especially with regards to dependency injection?
Yep, this approach works well.
Nameko doesn't introspect the service class until run-time, so it sees whatever standard Python class inheritance produces.
One thing to note is that your base class is not "abstract" -- if you point nameko run at app/customer/service.py it will attempt to run it. Relatedly, if you put your "concrete" subclass in the same module, nameko run will try to run both of them. You can mitigate this by specifying the service class explicitly, i.e. nameko run app.service:HTTPSalesService
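To illustrate the "standard Python class inheritance" point (a hedged sketch; the import path follows the question's layout): by the time the runner sees HTTPSalesService it is one class with one set of dependency declarations, so only one db provider should be bound per running service.

# a hedged sketch: inspect what the runner will see at run-time
from app.service import HTTPSalesService

print(HTTPSalesService.name)               # "http_sales_service"
print(HTTPSalesService.db)                 # the single Database(Base) declared on the subclass
print(HTTPSalesService.customer_listing)   # entrypoint inherited from HTTPCustomerService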
In a nutshell, I'm in the process of upgrading a .NET Standard 2.1 app to .NET 6, and upgrading the various libraries accordingly, in particular MassTransit v5 to v8 and Autofac 4.9.4 to 6.4.0.
This is a multi-tenant application where one instance is shared by multiple tenants, and each tenant has their own database.
The upgrade has gone well apart from one snag. The application uses the no-longer-available AutofacReceivedEndpointExtensions to set up the tenant details in the consumer, and I am struggling to find a way to replicate the functionality it provides.
Below is the key bit of code.
config.ReceiveEndpoint(host, azureBusConfig.QueueName, endpoint =>
{
    ConfigureConsumer<MyConsumer>(endpoint, componentContext);
});

private static void ConfigureConsumer<TConsumer>(
    IServiceBusReceiveEndpointConfigurator endpoint,
    IComponentContext componentContext,
    Action<IConsumerConfigurator<TConsumer>> configure = null)
    where TConsumer : class, IConsumer
{
    endpoint.Consumer(componentContext, configure, configureScope: (container, context) =>
    {
        var tenantName = context.Headers.Get<string>("tenant");
        var userId = context.Headers.Get<int>("userId");
        container.RegisterInstance(new NamedTenantInfoProvider(tenantName, userId)).As<ITenantInfoProvider>();
    });
}
The endpoint.Consumer method as shown is no longer provided.
The ITenantInfoProvider interface is injected into various constructors in the application, e.g. to set up the DbContext for a tenant so it points to the correct database.
public interface ITenantInfoProvider
{
    string GetTenantName();
    int? GetUserId();
}
There are two implementations of ITenantInfoProvider. The NamedTenantInfoProvider is used to set the tenant from the received message, as above.
There is also a RequestTenantInfoProvider, which gets the tenant from the HTTP request, e.g. via an API call.
The RequestTenantInfoProvider is registered as follows:
builder.RegisterType<RequestTenantInfoProvider>()
       .As<ITenantInfoProvider>()
       .InstancePerLifetimeScope();
So, what should happen is that the RequestTenantInfoProvider is injected into the constructors by default, but when a message is being consumed the NamedTenantInfoProvider instance is injected instead.
I have tried registering the NamedTenantInfoProvider the same way as the RequestTenantInfoProvider, injecting an IEnumerable<ITenantInfoProvider> into the constructors, setting the tenant on the named instance in the consumer's ConfigureConsumer, and then using whichever instance has a tenant set. However, the NamedTenantInfoProvider instance is set after it is required in the other constructors, e.g. the DbContext.
The only way I can get the application to fully work is to hardcode a tenant name in the NamedTenantInfoProvider class.
I was hoping that someone has already refactored some similar code to replace the endpoint.Consumer call and can advise a solution.
It may be that I'm missing a bit of knowledge regarding how scoping works with the Microsoft Dependency Injection/MassTransit configuration. Note: I didn't write the original application, and this is my first dabble with MassTransit as well.
MassTransit v8 (and onward) only supports IServiceCollection, which is part of Microsoft.Extensions.DependencyInjection. Third-party containers are no longer directly supported.
There is a Scoped Filter sample that might help you understand how scopes work with MSDI. The token concept is similar to that used by developers injecting "tenant info" into consumers.
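To give a feel for the shape of that sample, here is a hedged sketch (not a definitive implementation) of a scoped consume filter on MassTransit v8 with Microsoft DI. Only ITenantInfoProvider, NamedTenantInfoProvider, RequestTenantInfoProvider and MyConsumer come from the question; TenantInfoAccessor, TenantConsumeFilter and the connection string placeholder are illustrative:

using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.DependencyInjection;

// Scoped holder populated from the message headers before the consumer runs.
public class TenantInfoAccessor
{
    public ITenantInfoProvider? Current { get; set; }
}

// Scoped consume filter: resolved from the same DI scope as the consumer.
public class TenantConsumeFilter<T> : IFilter<ConsumeContext<T>> where T : class
{
    private readonly TenantInfoAccessor _accessor;

    public TenantConsumeFilter(TenantInfoAccessor accessor) => _accessor = accessor;

    public Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        // Same header reads as the old configureScope callback.
        var tenantName = context.Headers.Get<string>("tenant");
        var userId = context.Headers.Get<int>("userId");
        _accessor.Current = new NamedTenantInfoProvider(tenantName, userId);
        return next.Send(context);
    }

    public void Probe(ProbeContext context) => context.CreateFilterScope("tenantInfo");
}

// Registration (e.g. in Program.cs). The filter runs before the consumer is
// created, so the consumer and anything it pulls in (e.g. the DbContext) should
// see the message-derived provider; outside a consume scope the request-based
// provider is used instead.
services.AddScoped<TenantInfoAccessor>();
services.AddScoped<RequestTenantInfoProvider>();
services.AddScoped<ITenantInfoProvider>(sp =>
    sp.GetRequiredService<TenantInfoAccessor>().Current
    ?? sp.GetRequiredService<RequestTenantInfoProvider>());

services.AddMassTransit(x =>
{
    x.AddConsumer<MyConsumer>();
    x.UsingAzureServiceBus((context, cfg) =>
    {
        cfg.Host("<connection string>");   // placeholder
        cfg.UseConsumeFilter(typeof(TenantConsumeFilter<>), context);
        cfg.ConfigureEndpoints(context);
    });
});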
Is there a way to create an IDbContext interface for DI (using AutoFac)?
I am using ASP.NET MVC 5 and EF 6, and I would like to create an interface for dependency injection.
Is there a way to do it?
Currently I register my context class and it works fine:
builder.RegisterType<CustomContext>().SingleInstance().InstancePerLifetimeScope();
DbContexts should be short-lived. A common pattern for working with EF contexts is the Unit of Work pattern, and there are a few implementations out there for EF that can help manage the scope of a DbContext. At worst, for ASP.NET you would want the DbContext lifetime set to the request, no longer.
Option 1: (Recommended) Register a unit of work implementation (such as Mehdime's DbContextScope) and let that manage the DbContext scope. These follow the interface/concrete definitions to work well with DI.
Option 2: Register a DbContextFactory and use that to provide DbContexts. I.e.
using (var context = ContextFactory.Create())
{
    // ....
}
Where ContextFactory is a defined DbContextFactory class implementing an IDbContextFactory interface.
Option 3: Register the DbContext itself as a PerRequest lifetime scope.
If your goal is to inject DbContexts to facilitate testing, I would highly recommend adopting a Repository pattern (not a generic repository pattern, i.e. Repository<TEntity>) and utilizing either Option 1 or Option 2. The advantage of a repository is that it serves as a boundary for the unit tests. Your code under test can then be served a mocked repository class which in turn returns stubbed entities or IEnumerable<TEntity>/IQueryable<TEntity>. Mocking DbContexts and their DbSets is honestly a PITA. Repository methods can be tested if and as desired using integration-style tests talking to a real database.
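For Option 2, a minimal sketch of what such a factory might look like with Autofac; the interface and class names are illustrative, only CustomContext and builder come from the question:

// Illustrative only: a hand-rolled factory so consumers depend on an
// abstraction instead of newing up CustomContext directly.
public interface IDbContextFactory
{
    CustomContext Create();
}

public class DbContextFactory : IDbContextFactory
{
    public CustomContext Create()
    {
        return new CustomContext();
    }
}

// Autofac registration: the factory itself can live per lifetime scope; the
// contexts it creates are short-lived and disposed by the calling code
// (see the using block above).
builder.RegisterType<DbContextFactory>()
       .As<IDbContextFactory>()
       .InstancePerLifetimeScope();

And for Option 3, a per-request registration (assuming the Autofac ASP.NET MVC integration package) would look something like:

builder.RegisterType<CustomContext>()
       .InstancePerRequest();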
I have a user aggregate which is created using a CreateUser command, which consists of an aggregate identifier and a username.
Along with that I have a domain service that communicates with MongoDB and checks whether the username exists; if not, it puts it there,
e.g. registerUsername(username) -> true/false depending on whether it registered the username or not.
My question is: would it be a good idea to create a command handler on top of the user aggregate that handles the CreateUser command and, depending on whether the username is available, dispatches the proper commands/events? Like so:
@Component
class UserCommandHandler(
    @Autowired
    private val repository: Repository<User>,
    @Autowired
    private val eventBus: EventBus,
    @Autowired
    private val service: UniqueUserService
) {

    @CommandHandler
    fun createUser(cmd: CreateUser) {
        if (this.service.registerUsername(cmd.username)) {
            this.repository.newInstance { User(cmd.id) }
                .handle(GenericCommandMessage(cmd))
        } else {
            this.eventBus.publishEvent(UserCreateFailed(cmd.id, cmd.username))
        }
    }
}
This question is not necessarily about set uniqueness in DDD, but more about where I should put the dependency on domain services. I could probably create a user registration saga and inject the service into that saga, but I think a saga should only rely on command dispatching and not contain any if/else logic.
I think the place to put your domain service depends on the use case at hand.
I typically try to have domain services do virtually no outbound calls to other services or databases at all.
The domain service you're now conceiving, however, does exactly that to solve the uniqueness issue, as you point out.
In this situation, you could likely get by with the suggested approach.
You could also think of introducing a MessageHandlerInterceptor (or, even fancier, a HandlerEnhancerDefinition as described here), specifically triggering on the create command and performing the desired check.
If it were a domain service like the one I just depicted (i.e. zero outbound calls from the domain service), then you can safely wire it into your command handling functions to perform some action.
If you're in a Spring environment, simply having your domain service as a bean and providing it as a parameter to your message handling function is sufficient for Axon to resolve it for you (through the means of ParameterResolvers, as described here), as sketched below.
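A hedged sketch of that wiring, assuming Axon Framework 4 with Spring; UniqueUserService and CreateUser come from the question, while UserCreated and the rejection behaviour are purely illustrative:

import org.axonframework.commandhandling.CommandHandler
import org.axonframework.eventsourcing.EventSourcingHandler
import org.axonframework.modelling.command.AggregateIdentifier
import org.axonframework.modelling.command.AggregateLifecycle
import org.axonframework.spring.stereotype.Aggregate

@Aggregate
class User() {

    @AggregateIdentifier
    private lateinit var id: String

    @CommandHandler
    constructor(cmd: CreateUser, service: UniqueUserService) : this() {
        // Axon's Spring ParameterResolver injects the UniqueUserService bean here,
        // so the uniqueness check lives right next to the command handler.
        if (!service.registerUsername(cmd.username)) {
            throw IllegalStateException("Username ${cmd.username} is already taken")
        }
        AggregateLifecycle.apply(UserCreated(cmd.id, cmd.username))
    }

    @EventSourcingHandler
    fun on(event: UserCreated) {
        this.id = event.id
    }
}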
Hope this helps you out, @PolishCivil!
I can't find the answer to this...
If I inject the service container, like:
# config.yml
my_listener:
    class: MyListener
    arguments: ["@service_container"]
my_service:
    class: MyService
// MyListener.php
class MyListener
{
    protected $container;

    public function __construct(ContainerInterface $container)
    {
        $this->container = $container;
    }

    public function myFunction()
    {
        $my_service = $this->container->get('my_service');
        $my_service->doSomething();
    }
}
then it works just as well as if I do:
# config.yml
my_listener:
    class: MyListener
    arguments: ["@my_service"]
my_service:
    class: MyService
// MyListener.php
class MyListener
{
    protected $my_service;

    public function __construct(MyService $my_service)
    {
        $this->my_service = $my_service;
    }

    public function myFunction()
    {
        $this->my_service->doSomething();
    }
}
So why shouldn't I just inject the service container, and get the services from that inside my class?
My list of reasons why you should prefer injecting services:
Your class is dependent only on the services it needs, not the service container. This means the service can be used in an environment which is not using the Symfony service container. For example, you can turn your service into a library that can be used in Laravel, Phalcon, etc - your class has no idea how the dependencies are being injected.
By defining dependencies at the configuration level, you can use the configuration dumper to know which services are using which other services. For example, by injecting @mailer, it's quite easy to work out from the service container where the mailer has been injected. On the other hand, if you do $container->get('mailer'), then pretty much the only way to find out where the mailer is being used is to do a find.
You'll be notified about missing dependencies when the container is compiled, instead of at runtime. For example, imagine you have defined a service which you are injecting into a listener. A few months later, you accidentally delete the service configuration. If you are injecting the service, you'll be notified as soon as you clear the cache. If you inject the service container, you'll only discover the error when the listener fails because the container cannot get the service. Sure, you could pick this up if you have thorough integration testing, but ... you have got thorough integration testing, haven't you? ;)
You'll know sooner if you are injecting the wrong service. For example, if you have:
public function __construct(MyService $my_service)
{
    $this->my_service = $my_service;
}
But you've defined the listener as:
my_listener:
    class: Whatever
    arguments: ["@my_other_service"]
When the listener receives MyOtherService, PHP will throw an error, telling you that it's receiving the wrong class. If you're doing $container->get('my_service'), you are assuming that the container is returning the right class, and it can take a long time to figure out that it's not.
If you're using an IDE, then type hinting adds a whole load of extra help. If you're using $service = $container->get('service'); then your IDE has no idea what $service is. If you inject with
public function __construct(MyService $my_service)
{
    $this->my_service = $my_service;
}
then your IDE knows that $this->my_service is an instance of MyService, and can offer help with method names, parameters, documentation, etc.
Your code is easier to read. All your dependencies are defined right there at the top of the class. If they are scattered throughout the class with $container->get('service') then it can be a lot harder to figure out.
Your code is easier to unit test. If you're injecting the service container, you've got to mock the service container, and configure the mock to return mocks of the relevant services. By injecting the services directly, you just mock the services and inject them - you skip a whole layer of complication.
Don't be fooled by the "it allows lazy loading" fallacy. You can configure lazy loading at configuration level, just by marking the service as lazy: true.
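For example, a minimal sketch reusing the service ids from the question above (note that lazy services also need the symfony/proxy-manager-bridge package installed):

my_listener:
    class: MyListener
    arguments: ["@my_service"]
my_service:
    class: MyService
    lazy: true   # a proxy is injected; MyService is only instantiated on first use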
Personally, the only time injecting the service container was the best possible solution was when I was trying to inject the security context into a doctrine listener. This was throwing a circular reference exception, because the users were stored in the database. The result was that doctrine and the security context were dependent on each other at compile time. By injecting the service container, I was able to get round the circular dependency. However, this can be a code smell, and there are ways round it (for example, by using the event dispatcher), but I admit the added complication can outweigh the benefits.
Besides all the disadvantages explained by others (no control over the services used, runtime compilation, missing dependencies, etc.),
there is one main reason: it breaks the main advantage of using a DIC - dependency replacement.
If a service is defined in a library, you won't be able to replace its dependencies with local ones that fit your needs.
This reason alone is strong enough not to inject the whole DIC. You just break the whole idea of replacing dependencies, since they are HARDCODED! in the service ;)
BTW, don't forget to require interfaces in service constructors instead of specific classes as much as you can - again, nice dependency replacement.
EDIT: Dependency replacement example
Service definition in some vendor:
<service id="vendor_service" class="My\VendorBundle\SomeClass">
    <argument type="service" id="vendor_dependency" />
</service>
Replacement in your app:
<service id="vendor_service" class="My\VendorBundle\SomeClass">
    <argument type="service" id="app_dependency" />
</service>
This allows you to replace the vendor logic with your customized version, but don't forget to implement the required interface. With hardcoded dependencies you're not able to replace a dependency in just one place.
You can also override the vendor_dependency service, but that will replace it everywhere, not only in vendor_service.
It is not a good idea because you're making your class dependent on the DI container. What happens when some day you decide to pull out your class and use it in an entirely different project? Now I'm not talking about Symfony or even PHP, I'm talking generally. In that case you have to make sure the new project uses the same kind of DI mechanism, with the same methods supported, or you get exceptions. And what happens if the project does not use DI at all, or uses some cool new DI implementation? You have to go through your whole codebase and change things to support the new DI. In large projects this can be problematic and costly, especially when you're pulling out more than just one class.
It is best to make your classes as independent as possible. This means keeping the DI out of your usual code: think of it as a third party that decides what goes where and points to where things should go, but doesn't go there and do it itself. That is how I understand it.
Although, as tomazahlin said, I agree that in Symfony projects it occasionally helps prevent circular dependencies. That's the only case where I'd use it, and I'd make damn sure it's the only option.
Injecting the whole container is not a good idea in general. Well, it works, but why inject the whole container when you only need a few other services or parameters?
Sometimes you want to inject the whole container to avoid circular references, because if you inject the whole container you get "lazy loading" of the services you require. An example would be Doctrine entity listeners.
You can get the container from every class that is "Container Aware" or has access to the kernel.
Is there a way to access the configuration parameters in config.yml from the model layer? From the controller I can use $this->container->getParameter('xyz'). But how can it be done from a class in the Model layer?
In Symfony2, entities are designed as POPOs (plain old PHP objects), meaning that they shouldn't really have access to anything outside of their own scope.
If you need some config option in one of your entities, consider passing it as a parameter from the controller, like so:
$entityName->methodName($param1, $this->container->getParameter('xyz'));
This could (and will) break the DIC pattern, but you could use a singleton class to "globalize" what you need.
To feed your globals, use the boot() method of your Bundle class (where you can access DIC stuff, and hence configuration).
Or, more simply, add a static field to your entity.
Quick & dirty solution, don't abuse it ;-)
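A hedged sketch of that boot()-fed static field; the bundle and entity class names are illustrative, only the 'xyz' parameter comes from the question:

use Symfony\Component\HttpKernel\Bundle\Bundle;

class AcmeModelBundle extends Bundle
{
    public function boot()
    {
        // the container is available once the bundle is booted, so the
        // parameter can be pushed into a (public static) field on the entity
        MyEntity::$xyz = $this->container->getParameter('xyz');
    }
}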
You can use dependency injection and add your model to your services.yml file; like every other service you create, you can provide other services as constructor parameters. The only downside is that you call $derp = $this->get("your_service_name"); instead of $derp = new Derp();.
For example:
# src/Derp/LolBundle/Resources/config/services.yml
services:
    derp:
        class: \Derp\LolBundle\Entity\Message
        arguments: ["@service_container"]
@service_container is a service found using php app/console container:debug. It will function identically to $this->container in your controllers, and it is provided to the constructor of your class. See here for more information on how to use service containers.
As previously mentioned, entities are POPOs (plain old PHP objects), and the previous method of dependency injection is a poor choice simply because you will have to remember to provide your model entity with the same object every time you use it (which is a hassle); Symfony2 services are a way to mitigate that pain.
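Following the earlier advice about injecting only what you need, a hedged variant of the registration above injects just the parameter rather than the whole container (the xyz parameter name is reused from the question as an example):

# src/Derp/LolBundle/Resources/config/services.yml
services:
    derp:
        class: \Derp\LolBundle\Entity\Message
        arguments: ["%xyz%"]   # only the parameter the class needs, not the container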