I have a .net core web application that uses Autofac multitenancy container.
The tenant strategy resolves the tenant by looking at the path of the HTTP requests.
However, there is a specific piece of functionality in which tenant A needs to use the configuration of another tenant B (a sub-tenant in this case); the problem is that which sub-tenant's configuration to use is not known until tenant A has already performed some logic.
Is there a way to obtain the services of another tenant at runtime?
I will try to clarify with an example:
What I have is more or less:
An HTTP request to GET my.host.net/A/rules
The tenant resolver is capable of identifying that the current tenant is A (it is in the path, just after the host name)
The tenant resolver gets the general rules from the database, and one of them indicates that the configuration of another tenant, B, should be used
From here on, I would like to use the services of tenant B.
What I have tried / thought about:
Keep a reference to the MultitenantContainer and use GetTenantScope to resolve tenant B's scope in a factory class that resolves the services to use (a sketch of this idea appears after this list). However, I don't know the implications in terms of memory usage or the possible problems with mixing tenants.
Forget about multitenancy and just save configurations per tenant in a specific class.
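For what it's worth, a minimal sketch of the first option, assuming the MultitenantContainer is captured at startup and registered as a single instance; SubTenantServiceFactory and ResolveForTenant are illustrative names, not Autofac APIs:

using Autofac;
using Autofac.Multitenant;

public class SubTenantServiceFactory
{
    private readonly MultitenantContainer _container;

    public SubTenantServiceFactory(MultitenantContainer container)
    {
        _container = container;
    }

    public TService ResolveForTenant<TService>(object tenantId) where TService : class
    {
        // GetTenantScope returns the tagged lifetime scope for that tenant,
        // creating it (with its tenant-specific registrations) on first use.
        ILifetimeScope tenantScope = _container.GetTenantScope(tenantId);
        return tenantScope.Resolve<TService>();
    }
}

The tenant scopes are cached by the container, so repeated calls are cheap; the memory-usage and tenant-mixing concerns raised above still apply.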
I'm not sure what "sub-tenant" means in this context. Autofac has no notion of multi-level tenancy in its multitenant support, so while this construct may make sense in the context of your application, trying to make that work in Autofac is not going to be simple.
Trying to switch tenants mid-request is going to be a challenge at best. Everything in the request pipeline (middleware, controllers, etc.) is going to want to use HttpContext.RequestServices, which is set first thing in the request. Like, it's literally the very first middleware that runs. Once it's set, the pipeline starts resolving, controllers and other things start resolving, and... it's locked into that tenant. You can't switch it.
Given that, I'd caution you about trying to resolve some things from one tenant, switching mid-request, and resolving the rest from a different tenant. It's likely you'll get inconsistencies.
Say you have a middleware instance that takes an ISomeCoolService. You also have a controller that needs ISomeCoolService, but you're using the special tenant-switching logic in the controller instead of taking it as a dependency. During the middleware execution, the middleware will get tenant A's ISomeCoolService but the controller will use tenant B's ISomeCoolService and now you've got application behavior inconsistency. Trying to ensure consistency with the tenant switching is going to be really, really hard.
Here's what I'd recommend:
If you can do all the tenant determination up front in the ITenantIdentificationStrategy and cache that in, say, HttpContext.Items so you don't have to look it up again - do that (see the sketch after this list). The very first hit in the pipeline might be slow with the initial tenant determination logic, but after that the ITenantIdentificationStrategy can look in HttpContext.Items for the tenant ID instead of doing the database call, and it'll be fast. This will stop you from having to switch tenants mid-request.
If you can't do the tenant determination up front and you need the pipeline to execute a while before you can figure it out... you may need a different way to determine the tenant. Truly, try to avoid switching tenants. It will cause you subtle problems forever.
Don't try to get "tenant inheritance" working, at least not with the stock Autofac Multitenant support. I recognize that it would be nice to say "some services are tenant A but others are tenant B and it inherits down the stack" but that's not something built into the multitenant support and is going to be really hard to try to force to work.
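To illustrate the first recommendation, here's a rough sketch assuming ASP.NET Core with Autofac.Multitenant; the class name, the "TenantId" item key, and the path parsing (which stands in for your database lookup) are all illustrative:

using System;
using System.Linq;
using Autofac.Multitenant;
using Microsoft.AspNetCore.Http;

public class CachingPathTenantStrategy : ITenantIdentificationStrategy
{
    private const string TenantKey = "TenantId";
    private readonly IHttpContextAccessor _accessor;

    public CachingPathTenantStrategy(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }

    public bool TryIdentifyTenant(out object tenantId)
    {
        tenantId = null;
        var context = _accessor.HttpContext;
        if (context == null)
        {
            return false;
        }

        // Later calls in the same request hit the cache, not the database.
        if (context.Items.TryGetValue(TenantKey, out tenantId))
        {
            return tenantId != null;
        }

        // First call: do the (possibly slow) determination, then cache it.
        tenantId = context.Request.Path.Value?
            .Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries)
            .FirstOrDefault();
        context.Items[TenantKey] = tenantId;
        return tenantId != null;
    }
}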
If you really, really, really are just dedicated to getting this tenant "hierarchy" thing working, you could try forking the Autofac.Multitenant support and implementing a new MultitenantContainer that allows for sub-tenants. The logic of the MultitenantContainer isn't actually that complex, it's just storing a tagged lifetime scope per tenant. Hypothetically, you could add some functionality to enable sub-tenant configuration. It won't be five minutes of work, and it's not really something we'd plan on adding to Autofac, so it would be a total fork that you get to own, but you could possibly do it.
Related
I want to create a service that accesses an external API, and I want to cache common requests from the API inside of that service. It depends on 3 other services, but I want to give it its own instance of the cache (MemoryDistributedCache might later be changed for something else).
services.AddSingleton<ISomeApi, SomeApi>(provider => new SomeApi(
    Configuration.Get<Options>(),
    new MemoryDistributedCache(new MemoryCache(new MemoryCacheOptions())),
    provider.GetService<ILogger<SomeApi>>()
));
Now from my controllers I can access the API via DI. It works nicely, but I'm not sure if it's some sort of anti-pattern, or if there are better ways of doing it.
I mean, the real problem is separating the internal cache: requesting IDistributedCache from one service would give me the same object as if I requested it from another service, and they must be separated.
This sounds like something you could use a proxy or decorator pattern for. The basic problem is that you have a service that does some data access, and another service responsible for caching the results of the first service. I realize you're not using a repository per se, but nonetheless the CachedRepository pattern should work for your needs. See here:
http://ardalis.com/introducing-the-cachedrepository-pattern
and
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
You can write your cached implementation such that it takes in the actual SomeApi type in its constructor if you don't need that part of the design to be flexible.
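A minimal sketch of what that could look like, assuming the API returns strings keyed by a request key; ISomeApi, GetValueAsync, and the five-minute expiry are illustrative, not taken from the question:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public interface ISomeApi
{
    Task<string> GetValueAsync(string key);
}

public class CachedSomeApi : ISomeApi
{
    private readonly ISomeApi _inner;      // the real SomeApi
    private readonly IMemoryCache _cache;  // this decorator's private cache

    public CachedSomeApi(ISomeApi inner, IMemoryCache cache)
    {
        _inner = inner;
        _cache = cache;
    }

    public Task<string> GetValueAsync(string key)
    {
        // Only hits the real API when the entry is missing or expired.
        return _cache.GetOrCreateAsync(key, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _inner.GetValueAsync(key);
        });
    }
}

Register the decorator with its own new MemoryCache(new MemoryCacheOptions()) so each decorated service keeps a separate cache, which addresses the separation concern in the question.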
I need to make several properties accessible from the application's business layer. Those are some IDs and common settings. Most of them are valid only through the request-response lifespan.
This is a web application (ASP.NET Web Forms to be specific) with dependency injection set up.
Currently those properties are passed through method parameters directly to the business layer's services. That works but is not very convenient, since:
sometimes parameter values need to be passed deeper, obscuring readability a bit
some properties should be lazily resolved, and this should be done only once per request
retrieving properties that are resolved by touching a database can be confusing for new developers (there is no unified way of doing this)
some services are constructed by a factory which enriches them with some config parameters
I was thinking about introducing an application context interface, with an implementation in the main project, which would be created on every request. It could be injected into the services directly, making them parameterized automatically and independently (the services wouldn't need the factory anymore).
Is this how this problem should be tackled, or are there other options?
One thing I don't like here is that it might couple the main project with the business layer, which is not a perfect example of The Clean Architecture.
I'd say your solution is a very common one - inject an 'application context' into your classes. One thing I would be careful of, though, is making sure you are following the Interface Segregation Principle (from SOLID). Don't just start making all your classes expect an application context instance. Instead, design interfaces that split the application context up, and have your classes expect them as dependencies. Your application context will then need to implement all the interfaces.
This is the correct way to do things, as it decouples your classes from the implementation. Really, your classes don't care whether their dependency comes from one giant application context; they just care about the specific methods it implements. This will make your code more robust, as you will reduce the risk of breaking something if you change the implementation of the application context later on.
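A small sketch of that idea, with illustrative interface and member names:

public interface ICurrentUser
{
    int UserId { get; }
}

public interface IRequestSettings
{
    string Culture { get; }
}

// One concrete context implements every slice.
public class ApplicationContext : ICurrentUser, IRequestSettings
{
    public int UserId { get; set; }
    public string Culture { get; set; }
}

// A service depends only on the slice it actually needs:
public class InvoiceService
{
    private readonly ICurrentUser _user;

    public InvoiceService(ICurrentUser user)
    {
        _user = user;
    }
}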
Why don't you use a dependency injection container? Your global settings and parameters can be registered in it as pseudo-singletons, and then you will be able to neatly request them from any point inside your application.
I'm quite new to Symfony 2 and I'm moving to advanced topics like services. When should an object be a service?
For example, say that you have a facade object for making calls to a REST service. This class needs a username and password. Would it be correct to model that class as a global service, even if it's used in only a portion of the whole project?
# app/config/config.yml
parameters:
    my_proxy.username: username
    my_proxy.password: password

services:
    my_proxy:
        class: Acme\TestBundle\MyProxy
        arguments: [%my_proxy.username%, %my_proxy.password%]
Definition taken from the Symfony2 glossary:
A Service is a generic term for any PHP object that performs a specific task. A service is usually used "globally", such as a database connection object or an object that delivers email messages. In Symfony2, services are often configured and retrieved from the service container. An application that has many decoupled services is said to follow a service-oriented architecture.
I think your example is a perfect candidate for a service.
You don't want to copy construction code to all places you need your API client. It's better to delegate this task to the dependency injection container.
This way it's easier to maintain (as construction happens in one place and it's configurable).
It's also more flexible as you can easily change the API client class without affecting code which uses it (as long as it implements the same interface).
I don't think there's a golden rule. But basically, all classes implementing a task are good candidates for a service. Entities, on the other hand, are not, as they're most often just data holders.
I always recommend Fabien's series of articles on the subject: http://fabien.potencier.org/article/11/what-is-dependency-injection
Yes, because this will spare you the configuration part. You're not going to fetch the username and password and give them to the constructor each time you need this class.
Hi, I am creating an API using WCF. My question can be broken down into two separate ones.
1) I have quite a few calls; for instance, I have calls relating to customers, products, orders, and employees.
My question is: should all of this go into one public interface class, e.g.
public interface IRestService
public class RestService : IRestService
Or should I have one interface for each, e.g.
public interface ICustomer
public class Customer : ICustomer
public interface IProducts
public class Products: IProducts
2) If you have an API which will be accessed by tens of thousands of users, with thousands of them concurrent, how would you set it up? What would your web.config settings be, for instance in terms of throttling? Also, what values would you give InstanceContextMode and ConcurrencyMode? Finally, what type of binding would it be, bearing in mind that websites and mobile phones can access the API?
For the sake of good practice, I would break up the API into separate interfaces so you have the option of splitting them into separate implementations in the future. You can still have just one service class implement all of the interfaces, like this:
public class RestService : ICustomer, IProducts, IOrders
However, it sounds as if you'd probably want to make them separate implementations anyway.
In terms of concurrency settings, ask yourself what resources need to be used on each call. If your service class's constructor can be written without any lengthy startup, then use PerCall. If you need to initialize expensive resources, then I'd recommend InstanceContextMode.Single with ConcurrencyMode.Multiple, and make sure you write thread-safe code, e.g. make sure you lock() on any class properties or other shared resources before you use them.
Database connections would not count as "expensive to initialize", though, because ADO will do connection pooling for you and eliminate that overhead.
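To make the singleton option concrete, here's a sketch; the contract, ExpensiveResource, and the operation bodies are illustrative placeholders:

using System.ServiceModel;

[ServiceContract]
public interface ICustomer
{
    [OperationContract]
    string GetCustomer(int id);
}

public class ExpensiveResource
{
    // Stands in for something costly to initialize (large lookup data, etc.).
    public string LoadCustomer(int id)
    {
        return "customer:" + id;
    }
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class RestService : ICustomer
{
    private readonly object _sync = new object();
    private readonly ExpensiveResource _resource = new ExpensiveResource();

    public string GetCustomer(int id)
    {
        lock (_sync) // concurrent calls are let in, so guard shared state
        {
            return _resource.LoadCustomer(id);
        }
    }
}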
Your throttling settings will be revealed by testing, as Ladislav mentions. You'd want to stress-test your service and use the results to get an idea of how many machines you'd need to service your anticipated load. Then you'll need a dedicated load balancer to route requests as either round-robin, or something that checks the health of each server. Load balancers can be set up to GET a "systemhealth.asp" page and check the results. If you return an "OK" then that machine stays in the pool, or can be temporarily removed from the pool if it times out or returns any other status.
Your binding would need to be WebHttpBinding for REST. BasicHttpBinding is meant for SOAP interfaces and doesn't support [WebGet], for example.
If it doesn't have to be a REST service, then you can get a bit more performance by using NetTcpBinding.
If you really have few operations, a single service can be used. Generally, services are logical collections of related operations, but the number of operations should be limited. Usually, if your service has more than 20 operations, you should think about refactoring.
Do you plan to use a REST service? I guess you do because of your first interface example. In that case you need WebHttpBinding (or a similar custom binding) with the default InstanceContextMode (PerCall) and ConcurrencyMode (Single) values. The only other meaningful combination for a REST service is InstanceContextMode.Single with ConcurrencyMode.Multiple, but that will create your service as a singleton, which can have an impact on your service implementation. My rule of thumb: don't use a singleton service unless you really need it.
Throttling configuration depends on your service implementation and on the performance of your servers. What do thousands of concurrent users really mean for you? Processing thousands of requests concurrently requires a good server cluster with a load balancer, or hosting in Azure (the cloud). Everything also depends on the speed of processing (the operation implementation) and the size of messages. The correct settings for MaxConcurrentInstances and MaxConcurrentCalls (which should be the same for PerCall instancing) should be revealed by performance testing. Note that the default values for service throttling changed in WCF 4.
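For reference, service throttling lives under serviceBehaviors in web.config; the numbers below are placeholders to be replaced with values from your own performance testing:

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceThrottling maxConcurrentCalls="128"
                           maxConcurrentInstances="128"
                           maxConcurrentSessions="64" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>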
I'm wondering whether it is a good approach in an ASP.NET project to have a field which "holds" a connection to the DB as a static field (Entity Framework):
public class DBConnector
{
    public static AdServiceDB db;
    // ...
}
That means there would be only one object for the entire application to communicate with the DB. I'm also wondering whether that object will refresh data changes from the DB tables, or whether maybe it shouldn't be static and I should create a connection dynamically. What do you think?
With connection pooling in .NET, generally creating a new connection for each request is acceptable. I'd evaluate the performance of creating a new one each time, and if it isn't a bottleneck, then avoid using the static approach. I have tried it before, and while I haven't run into any issues, it doesn't seem to help much.
A singleton connection to a database that is used across multiple web page requests from multiple users presents a large risk of cross-contamination of personal information across users. It doesn't matter what the performance impact is, this is a huge security risk.
If you don't have users or personal information, perhaps this doesn't apply to your project right now, but always keep it in mind. Databases and the information they contain tend to evolve in the direction of more specifics and more details over time.
This is why you should not use a singleton design pattern with your database connection. See also: Is using a singleton for the connection a good idea in ASP.NET website.
Hope it helps.
Bad idea. Besides the potential mistakes you could make by not closing connections properly and so forth, accessing a static object makes it very difficult to unit test your code. I'd suggest using a class that implements an interface, and then using dependency injection to get an instance of that class wherever you need it. If you determine that you want it to be a singleton, that can be decided in your DI bindings, not as a foundational point of your architecture.
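A sketch of that suggestion; IAdServiceDbFactory is an illustrative name, and AdServiceDB is the context type from the question:

public interface IAdServiceDbFactory
{
    AdServiceDB Create();
}

public class AdServiceDbFactory : IAdServiceDbFactory
{
    public AdServiceDB Create()
    {
        // A fresh context per unit of work; connection pooling keeps this cheap.
        return new AdServiceDB();
    }
}

// Consumers take the interface, so tests can substitute a fake factory,
// and the container (not the architecture) decides the lifetime.
public class AdsController
{
    private readonly IAdServiceDbFactory _dbFactory;

    public AdsController(IAdServiceDbFactory dbFactory)
    {
        _dbFactory = dbFactory;
    }
}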
I would say no.
A database connection should be created when needed to run a query and cleaned up after that query is done and the results are fetched.
If you use a single static instance to control all access to the DB, you may lose out on the automatic Connection Pooling that .NET provides (which could impact performance).
I think the recommendation is to "refresh often."
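As a sketch of that create-query-dispose pattern with plain ADO.NET (the query and table name are illustrative):

using System.Data.SqlClient;

public static class AdQueries
{
    public static int CountAds(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Ads", connection))
        {
            connection.Open();                   // checked out of the pool
            return (int)command.ExecuteScalar(); // run the query, fetch result
        }                                        // Dispose returns it to the pool
    }
}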
Since none of the answers have been marked as the answer, and I don't believe any have really addressed the question or the issue thereof...
In ASP.NET, you have Global, or HttpApplication. The way this works is that IIS will cache instances of your "application" (that is, an instance of your Global class). Normally (with default settings in IIS) you could have up to 10 instances of Global, and IIS will pick any one of these instances to satisfy a request.
Further, keep in mind that there could be multiple requests at any given moment in time, which means multiple instances of your Global class will be used. These instances could be ones that were previously instantiated and cached, or new instances (depending on the load your IIS server is seeing).
IIS also has a notion of app pools and worker processes. A worker process hosts your application and all the instances of your Global class (as discussed earlier). This translates to an app domain (in .NET terms).
Just to re-cap before moving on…
Multiple instances of your Global class will exist in the Worker process for your application (in IIS). Each one waiting to be called upon by IIS to satisfy a request. IIS will pick any one of these instances. They are effectively threads that have been cached by IIS and each thread has an instance of your Global class. When a request comes in, one of these threads is called upon to handle the request-response cycle. If multiple requests arrive simultaneously, then multiple threads (each contains an instance of your Global class) will be called upon to satisfy each of those requests.
Moving on…
Since there will be only one instance of a static class per app domain, you'll effectively have one instance of your class shared across all (up to 10) instances of Global. This is a bad idea because when multiple simultaneous requests hit your server, they'll either be blocked (if your class's methods use locks) or threads will be stepping on each other's toes. In other words, this approach is not inherently thread-safe, and if you make it thread-safe using thread synchronization primitives then you're unnecessarily blocking threads, negatively impacting the performance and scalability of your web application, with no gain whatsoever.
The real solution (and I use this in all my ASP.NET apps) is to have an instance of your BLL or DAL (as the case may be) per instance of Global. This will ensure the following:
1. Multiple threads are not an issue, since IIS guarantees one request-response per instance of Global at any given moment in time. So your code is inherently thread-safe.
2. You only have up to 10 instances of your BLL/DAL up and running at any given moment in time, ensuring that you're not constantly creating and disposing instances of (typically) large objects to satisfy each request, which on busy sites is huge.
3. You get really good performance due to #2 above.
You do have to ensure that your BLL/DAL is truly stateless, or that you reset any state at the start of each request-response cycle. You can use the BeginRequest event in Global to do that if you need to.
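A sketch of that arrangement; MyBusinessLayer and its Reset method are illustrative placeholders:

using System;
using System.Web;

public class MyBusinessLayer
{
    public void Reset()
    {
        // Clear any per-request state here.
    }
}

public class Global : HttpApplication
{
    // One BLL instance per HttpApplication instance: at most ~10 exist,
    // and each serves only one request-response cycle at a time.
    private readonly MyBusinessLayer _bll = new MyBusinessLayer();

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        _bll.Reset(); // reset state before handling the request
    }
}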
If you go down this route, be sure to read my blog post on this:
Instantiating Business Layers – ASP.NET