Situation
We have one ASP.NET MVC 5 application running against SQL Server. We have one master database containing a Tenants table where all of our tenants are registered, each with a connection string property pointing to its own personal database.
For authentication we are using the Microsoft OWIN library.
Autofac
We have set up Autofac like this:
var builder = new ContainerBuilder();
// Register the controllers
builder.RegisterControllers(typeof(Project.Web.ProjectApplication).Assembly);
// ### Register all persistence objects
// Project main database registration (PetaPoco instance taking a connection string as parameter)
builder.RegisterType<ProjectDatabase>()
.As<ProjectDatabase>()
.WithParameter(new NamedParameter("connectionString", GlobalSettings.ProjectTenantConnectionString))
.InstancePerLifetimeScope();
// Project tenant specific database registration
// ...
// Unit of work
builder.RegisterType<PetaPocoUnitOfWork>()
.As<IDatabaseUnitOfWork>()
.InstancePerRequest();
// ### Register all services
builder.RegisterAssemblyTypes(Assembly.Load("Project.Core"))
.Where(t => t.Name.EndsWith("Service"))
.AsImplementedInterfaces()
.InstancePerLifetimeScope();
// ### Register all repositories
builder.RegisterType<RepositoryFactory>()
.As<IRepositoryFactory>()
.InstancePerLifetimeScope();
builder.RegisterAssemblyTypes(Assembly.Load("Project.Core"))
.Where(t => t.Name.EndsWith("Repository"))
.AsImplementedInterfaces()
.InstancePerLifetimeScope();
// Register Logging
builder.RegisterType<Logger>().As<ILogger>().InstancePerLifetimeScope();
// Register Automapper
builder.RegisterAssemblyTypes(Assembly.Load("Project.Core")).As<Profile>();
builder.RegisterAssemblyTypes(Assembly.Load("Project.Web")).As<Profile>();
builder.Register(context => new MapperConfiguration(cfg =>
{
foreach (var profile in context.Resolve<IEnumerable<Profile>>())
{
cfg.AddProfile(profile);
}
})).AsSelf().SingleInstance();
// Resolve IComponentContext first so the mapper does not capture the
// temporary resolution context passed to the lambda
builder.Register(c =>
{
    var context = c.Resolve<IComponentContext>();
    return context.Resolve<MapperConfiguration>().CreateMapper(context.Resolve);
})
.As<AutoMapper.IMapper>()
.InstancePerLifetimeScope();
// Register Owin
builder.Register(ctx => HttpContext.Current.GetOwinContext()).As<IOwinContext>();
builder.Register(
c => new IdentityUserStore(c.Resolve<IUserService>()))
.AsImplementedInterfaces().InstancePerRequest();
builder.Register(
ctx => ctx.Resolve<IOwinContext>().Authentication)
.As<IAuthenticationManager>().InstancePerRequest();
builder.RegisterType<IdentityUserManager>().AsSelf().InstancePerRequest();
// Build container
var container = builder.Build();
// Tenant container
var tenantIdentifier = new RequestSubdomainStrategy();
var mtc = new MultitenantContainer(tenantIdentifier, container);
// Set autofac as dependency resolver
DependencyResolver.SetResolver(new AutofacDependencyResolver(mtc));
More details
With this setup we have an instance registered in Autofac for our master tenant database.
This is then injected into our PetaPocoUnitOfWork for committing the transaction.
This works, and I can get the tenant information.
But now we need the following to work, and we don't have a clue where to start.
How do we set up Autofac to register tenant-specific PetaPoco database instances to inject into the PetaPocoUnitOfWork, and how will the app know how to resolve them? We need access to two databases (the master database and the tenant's personal database): first to get the tenant's connection string, and then to do CRUD operations on the tenant's database.
What about our PetaPocoUnitOfWork, which holds the database it works against: should we also register it per tenant, pass the database in via Autofac's resolve callback, and scope it per request?
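For reference, we know the MultitenantContainer supports per-tenant registrations via ConfigureTenant; presumably the direction is something like the sketch below, where the tenant ids and the connection-string lookup against the master Tenants table are entirely assumed:
// Hypothetical per-tenant overrides: after reading each tenant's connection
// string from the master Tenants table, register a tenant-specific
// ProjectDatabase so the PetaPocoUnitOfWork resolved for that tenant uses it.
foreach (var tenant in GetTenantsFromMasterDatabase()) // assumed helper
{
    var tenantConnectionString = tenant.ConnectionString;
    mtc.ConfigureTenant(tenant.Id, b =>
        b.RegisterType<ProjectDatabase>()
         .As<ProjectDatabase>()
         .WithParameter(new NamedParameter("connectionString", tenantConnectionString))
         .InstancePerLifetimeScope());
}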
You can actually have a shard manager (similar to the Microsoft Azure shard map manager) that takes the connection string name and the tenant context. From that information it can resolve the connection and pass it on to the context.
This is resolved on a per-tenant basis, and the application then works with the tenant-specific connection; that is, it is injected into each of the services so that the established identity (the logged-in user) can be used to set the right connection object in the EF/data tier. This facilitates a loosely coupled design that is also easy to test and mock.
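A minimal sketch of that idea, with all names assumed (the real implementation and documentation live in the repository linked below):
using System.Data;
using System.Data.SqlClient;

// Assumed shard-manager sketch: looks up a tenant's connection string in the
// master database and hands an open connection to the data tier.
public interface IShardManager
{
    IDbConnection ResolveConnection(string connectionStringName, TenantContext tenantContext);
}

public class ShardManager : IShardManager
{
    private readonly ProjectDatabase _masterDatabase; // holds the Tenants table

    public ShardManager(ProjectDatabase masterDatabase)
    {
        _masterDatabase = masterDatabase;
    }

    public IDbConnection ResolveConnection(string connectionStringName, TenantContext tenantContext)
    {
        // GetTenantConnectionString is an assumed helper that reads the
        // Tenants table for this tenant's connection string.
        var connectionString = _masterDatabase.GetTenantConnectionString(
            connectionStringName, tenantContext.TenantId);
        var connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }
}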
You can find sample code and some documentation of what such an implementation could look like in my GitHub repository.
IMHO, the rationale behind the approach I suggest is that the per-tenant partitions are stored in a database (typically your master database) and need to be fetched and used even if you somehow manage to inject these via Autofac. I did not reproduce the code here, as it would take a fairly long explanation to cover the code and commentary; that is taken care of on GitHub.
HTH
Related
In the case of a Web API, each request is a distinct scope, and dependencies registered as scoped get resolved per request. So resolving dependencies per request per tenant is easy, as the tenant information (like TenantId) can be passed in the HTTP request headers, like below:
services.TryAddScoped<ITenantContext>(x =>
{
    var context = x.GetService<IHttpContextAccessor>().HttpContext;
    var tenantId = context.Request.Headers["TenantId"].ToString();
    var tenantContext = GetTenantContext(tenantId);
    return tenantContext;
});
Other registrations first resolve the TenantContext and use it to resolve their own dependencies. For example, IDatabase is registered as below; during resolution it resolves the tenant context and connects to the specific tenant database.
services.TryAddScoped<IDatabase>(x =>
{
    var tenantContext = x.GetService<ITenantContext>();
    return new Database(tenantContext.DatabaseConnectionString);
});
This is all good in a Web API service because each request is a scope. I am facing challenges using dependency injection in a multi-tenant console app. Suppose the app processes items from a multi-tenant queue, where each message can belong to a different tenant. While processing each message, it commits data to a tenant-specific database. So in this case the scope is each message in the queue, and the message contains the TenantId.
So when the app reads a message from the queue, it needs to get the TenantContext and then resolve the other dependencies based on it.
One straightforward option I see for achieving this dynamic resolution is to create the dependent objects manually using the TenantContext, but then I wouldn't be able to leverage dependency injection: all objects would be created manually and disposed after going out of scope once the message is processed.
var message = GetMessageFromQueue(queueName);
var tenantContext = GetTenantContext(message.TenantId);
var database = GetDatabaseObject(tenantContext);
// Do other processing now that we have a database object connected to the specific tenant DB
Is there an option in DI where I can pass in the TenantId dynamically, so that the TenantContext gets set for this scope and all further resolution within the scope leverages it?
Because the role of tenancy goes beyond the implementation ("this uses X database") and is actually contextual to the action being performed ("this uses X database and must use this connection string based on the context being handled in the action"), there is a risk that alternate implementations will assume an ambient tenant context is present, since it is not expressly described in your interfaces in any way. That is where the DI issue here comes from.
You might be able to:
Update your interfaces so that the tenancy information is an expected parameter of your methods. This ensures that, regardless of future implementation, the presence of the tenant ID is explicit in the signature:
public interface ITenantDatabase {
    TResponse Get<TResponse>(string tenantId, int id);
    // ... other methods ...
}
Add a factory wrapper around your existing interfaces to handle assigning the context at object creation, and have that factory return the IDatabase instance. This is basically what you are proposing manually, but with an abstraction around it that you can register and inject, so the code that uses it is not responsible for that logic:
public interface ITenantDatabaseFactory {
    IDatabase GetDatabaseForTenant(int tenantId);
}
// Add an implementation that manually generates and returns the scoped objects
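A minimal sketch of both pieces, assuming Microsoft.Extensions.DependencyInjection and reusing the question's own names (GetTenantContext, Database, IDatabase); everything else here is assumed:
using Microsoft.Extensions.DependencyInjection;

// Hypothetical factory implementation: looks up the tenant context by id and
// hands back a database bound to that tenant's connection string.
public class TenantDatabaseFactory : ITenantDatabaseFactory
{
    public IDatabase GetDatabaseForTenant(int tenantId)
    {
        var tenantContext = GetTenantContext(tenantId.ToString());
        return new Database(tenantContext.DatabaseConnectionString);
    }
}

// In the console app, create one DI scope per queue message so that anything
// registered as scoped lives exactly as long as that message.
// serviceProvider is the app's root IServiceProvider built at startup.
var scopeFactory = serviceProvider.GetRequiredService<IServiceScopeFactory>();
var message = GetMessageFromQueue(queueName);
using (var scope = scopeFactory.CreateScope())
{
    var factory = scope.ServiceProvider.GetRequiredService<ITenantDatabaseFactory>();
    var database = factory.GetDatabaseForTenant(message.TenantId);
    // ... process the message against the tenant-specific database ...
}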
I have created a Web API that handles the creation of a JWT token based on the encrypted user details it receives in a POST request.
In addition to this, the STS API should also handle populating the caching layer (Redis or Hazelcast) with all the user data present in the database. Presently I have registered the caching service using dependency injection. This happens only once, when the API is first initialized.
services.AddSingleton<ICacheService, RedisCacheService>();
And in the TokenController I added the service as a constructor parameter so that the CachingService class, and thereby the caching layer, gets initialized: when the cacheService object is first created, it fetches all the user rows from the database and stores them as key-value pairs inside the Redis/Hazelcast database.
public TokenController(
    ICryptographyService cryptographyService,
    crudDBContext crudDBContext,
    IConfiguration configuration,
    ICacheService cacheService)
{
    _cryptographyService = cryptographyService;
    _context = crudDBContext;
    _config = configuration;
    _cacheService = cacheService;
}
But the TokenController constructor is only invoked when an endpoint is called, so I had to create a separate default [HttpGet] endpoint to ensure the constructor runs when the STS API first starts, so that the cacheService object gets created and the data gets loaded into the cache.
public ActionResult<string> Get()
{
    return "STS";
}
Please let me know if there is a proper way of doing this without calling an endpoint: that is, still use dependency injection, but run some code without an endpoint being called. I need to use dependency injection because I should be able to switch between Redis and Hazelcast by changing just the class name in Startup.cs.
With respect to Hazelcast and dependency injection: first, you would need to use the sources and not the Hazelcast NuGet version. Next, the configuration depends on whether you are in a container environment or a hosted environment. In both cases configuration keys are gathered from the same sources and in the same order, and options are registered in the service container and available via dependency injection.
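As for warming the cache at startup without a dummy endpoint, a common ASP.NET Core pattern is a hosted service that resolves ICacheService when the host starts. A minimal sketch, assuming ICacheService exposes some population method (InitializeAsync is an invented name here):
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Runs once at host startup, before any endpoint is hit. The ICacheService
// registration (Redis or Hazelcast) is still swapped in Startup.cs as before.
public class CacheWarmupService : IHostedService
{
    private readonly ICacheService _cacheService;

    public CacheWarmupService(ICacheService cacheService)
    {
        _cacheService = cacheService;
    }

    public Task StartAsync(CancellationToken cancellationToken)
        => _cacheService.InitializeAsync(cancellationToken); // assumed method

    public Task StopAsync(CancellationToken cancellationToken)
        => Task.CompletedTask;
}

// In Startup.ConfigureServices:
//     services.AddSingleton<ICacheService, RedisCacheService>();
//     services.AddHostedService<CacheWarmupService>();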
We are trying to implement Azure Service Bus for managing user "work queues".
Background:
We have a web UI pushing new items to a Web API; these are persisted to a DB and then pushed to a Service Bus queue. All messages have a property that denotes who can work on the message. To provide a user with the ability to pick messages assigned to them, I am thinking about creating a topic with subscriptions that filter on that property.
Approach
To achieve the above mentioned capability:
I need to register a sender for the queue and a sender for the topic, all within the same Web API. I have tried adding the two senders as singletons, but during DI, how do I pick which sender to use?
services.TryAddSingleton(implementationFactory =>
{
    var serviceBusConfiguration = implementationFactory.GetRequiredService<IMessagingServiceConfiguration>();
    var serviceBusClient = new ServiceBusClient(serviceBusConfiguration.IntakeQueueSendConnectionString);
    var serviceBusSender = serviceBusClient.CreateSender(serviceBusConfiguration.IntakeQueueName);
    return serviceBusSender;
});
services.TryAddSingleton(implementationFactory =>
{
    var serviceBusConfiguration = implementationFactory.GetRequiredService<IMessagingServiceConfiguration>();
    var serviceBusClient = new ServiceBusClient(serviceBusConfiguration.TopicConnectionString);
    var topicSender = serviceBusClient.CreateSender(serviceBusConfiguration.TopicName);
    return topicSender;
});
I am using the above setup to add the services as singletons, and individually I am able to send and receive messages from either the topic or the queue.
How can I register both implementations and pick which one should be injected when I consume them via constructor injection?
With respect to resolving DI registrations for multiple instances of the same type, the first answer to this question illustrates using a service resolver with ASP.NET Core. To my knowledge, that is still the best approach.
For the senders, you could differentiate by checking their EntityPath property to identify whether they point to your queue or topic.
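A hedged sketch of that EntityPath approach. One caveat with the code in the question: TryAddSingleton only registers a service type once, so the second ServiceBusSender registration is silently skipped; plain AddSingleton is used below. IMessagingServiceConfiguration and its members are taken from the question; WorkItemPublisher is an invented consumer:
using System.Collections.Generic;
using System.Linq;
using Azure.Messaging.ServiceBus;
using Microsoft.Extensions.DependencyInjection;

// Register both senders with AddSingleton so neither registration is skipped.
services.AddSingleton(implementationFactory =>
{
    var cfg = implementationFactory.GetRequiredService<IMessagingServiceConfiguration>();
    return new ServiceBusClient(cfg.IntakeQueueSendConnectionString).CreateSender(cfg.IntakeQueueName);
});
services.AddSingleton(implementationFactory =>
{
    var cfg = implementationFactory.GetRequiredService<IMessagingServiceConfiguration>();
    return new ServiceBusClient(cfg.TopicConnectionString).CreateSender(cfg.TopicName);
});

// A consumer can then take all registered senders and pick by EntityPath:
public class WorkItemPublisher
{
    private readonly ServiceBusSender _queueSender;
    private readonly ServiceBusSender _topicSender;

    public WorkItemPublisher(IEnumerable<ServiceBusSender> senders,
                             IMessagingServiceConfiguration configuration)
    {
        _queueSender = senders.Single(s => s.EntityPath == configuration.IntakeQueueName);
        _topicSender = senders.Single(s => s.EntityPath == configuration.TopicName);
    }
}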
I'm currently working on a C# UWP application that runs on Windows 10 IoT Core OS on an ARM processor. For this application, I am using a SQLite DB for my persistence, with Entity Framework Core as my ORM.
I have created my own DbContext and call the Migrate function on startup, which creates my DB. I can also successfully create a DbContext instance in my main logic, which can read/write data using the model. All good so far.
However, I've noticed that the performance of creating a DbContext for each interaction with the DB is painfully slow. Although I can guarantee that only my application is accessing the database (I'm running on custom hardware with a controlled software environment), I do have multiple threads in my application that need to access the database via the DbContext.
I need to find a way to optimize the connection to my SQLite DB in a way that is thread safe in my application. As I mentioned before, I don't have to worry about any external applications.
At first, I tried to create a SqliteConnection object externally and then pass it in to each DbContext that I create:
_connection = new SqliteConnection(@"Data Source=main.db");
... and then make that available to my DbContext and use it in the OnConfiguring override:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlite(_connection);
}
... and then use the DbContext in my application like this:
using (var db = new MyDbContext())
{
    var data = new MyData { Timestamp = DateTime.UtcNow, Data = "123" };
    db.MyData.Add(data);
    db.SaveChanges();

    // Example data read
    MyDataListView.ItemsSource = db.MyData.ToList();
}
Taking the above approach, I noticed that the connection is closed down automatically when the DbContext is disposed, regardless of the fact that the connection was created externally. So this ends up throwing an exception the second time I create a DbContext with the connection.
Secondly, I tried to create a single DbContext once statically and share it across my entire application. So instead of creating the DbContext in a using statement as above, I tried the following:
// Where Context property returns a singleton instance of MyDbContext
var db = MyDbContextFactory.Context;
var data = new MyData { Timestamp = DateTime.UtcNow, Data = "123" };
db.MyData.Add(data);
db.SaveChanges();
This offers me the performance improvements I hoped for but I quickly realized that this is not thread safe and wider reading has confirmed that I shouldn't do this.
So does anyone have any advice on how to improve the performance when accessing SQLite DB in my case with EF Core and a multi-threaded UWP application? Many thanks in advance.
Secondly, I tried to create a single DbContext once statically and share it across my entire application. So instead of creating the DbContext in a using statement as above, I tried the following...This offers me the performance improvements I hoped for but I quickly realized that this is not thread safe and wider reading has confirmed that I shouldn't do this.
I don't know why we shouldn't do this; maybe you can share something about what you read. But I think you can make the DbContext object global and static, and when you want to do CRUD you can do it on the main thread like this:
await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
    // App.BloggingDB is the static global DbContext defined in the App class
    var blog = new Blog { Url = NewBlogUrl.Text };
    App.BloggingDB.Add(blog);
    App.BloggingDB.SaveChanges();
});
But do dispose the DbContext at a proper time, as it won't be disposed automatically.
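Another option worth sketching (a different technique from the answer above): keep one SqliteConnection open for the whole app and share one prebuilt DbContextOptions across short-lived contexts. EF Core only closes a connection it opened itself, so opening it up front should avoid the dispose problem from the question. This assumes MyDbContext has a constructor accepting DbContextOptions:
using System;
using Microsoft.Data.Sqlite;
using Microsoft.EntityFrameworkCore;

// Open the connection once, up front, and keep it for the app lifetime.
var connection = new SqliteConnection(@"Data Source=main.db");
connection.Open();

// Build the options once; rebuilding them per context is part of the
// per-interaction cost in the original approach.
var options = new DbContextOptionsBuilder<MyDbContext>()
    .UseSqlite(connection)
    .Options;

// Each unit of work still gets its own short-lived context (no context
// instance is shared across threads), all over the shared connection.
// If multiple threads write concurrently, access to the shared connection
// may still need synchronization on your side.
using (var db = new MyDbContext(options))
{
    db.MyData.Add(new MyData { Timestamp = DateTime.UtcNow, Data = "123" });
    db.SaveChanges();
}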
I am trying to retrieve secrets from Azure Key Vault using a service identity in an ASP.NET 4.6.2 web application. I am using the code as outlined in this article. Locally, things are working fine, though that is because it is using my identity. When I deploy the application to Azure, I get an exception when keyVaultClient.GetSecretAsync(keyUrl) is called.
As best as I can tell, everything is configured correctly. I created a user-assigned identity so it could be reused, and made sure that identity had Get access to secrets and keys in the Key Vault policy.
The exception is an AzureServiceTokenProviderException. It is verbose and outlines how it tried four methods to authenticate. The information I'm concerned about is when it tries to use Managed Service Identity:
Tried to get token using Managed Service Identity. Access token could not be acquired. MSI ResponseCode: BadRequest, Response:
I checked Application Insights and saw that it tried to make the following connection with a 400 result error:
http://127.0.0.1:41340/MSI/token/?resource=https://vault.azure.net&api-version=2017-09-01
There are two things interesting about this:
Why is it trying to connect to a localhost address? This seems wrong.
Could this be getting a 400 back because the resource parameter isn't escaped?
In the MsiAccessTokenProvider source, it only uses that form of an address when the environment variables MSI_ENDPOINT and MSI_SECRET are set. They are not set in application settings, but I can see them in the debug console when I output environment variables.
At this point I don't know what to do. The examples online all make it seem like magic, but if I'm right about the source of the problem then there's some obscure automated setting that needs fixing.
For completeness here is all of my relevant code:
public class ServiceIdentityKeyVaultUtil : IDisposable
{
    private readonly AzureServiceTokenProvider azureServiceTokenProvider;
    private readonly Uri baseSecretsUri;
    private readonly KeyVaultClient keyVaultClient;

    public ServiceIdentityKeyVaultUtil(string baseKeyVaultUrl)
    {
        baseSecretsUri = new Uri(new Uri(baseKeyVaultUrl, UriKind.Absolute), "secrets/");
        azureServiceTokenProvider = new AzureServiceTokenProvider();
        keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
    }

    public async Task<string> GetSecretAsync(string key, CancellationToken cancellationToken = new CancellationToken())
    {
        var keyUrl = new Uri(baseSecretsUri, key).ToString();
        try
        {
            var secret = await keyVaultClient.GetSecretAsync(keyUrl, cancellationToken);
            return secret.Value;
        }
        catch (Exception ex)
        {
            /** rethrows error with extra details */
        }
    }

    /** IDisposable support */
}
UPDATE #2 (I erased update #1)
Whether I created a completely new app or a new service instance, I was able to recreate the error. However, in all instances I was using a user-assigned identity. If I remove that and use a system-assigned identity, then it works just fine.
I don't know why these would be any different. Does anybody have an insight? I would prefer the user-assigned one.
One of the key differences of a user-assigned identity is that you can assign it to multiple services. It exists as a separate asset in Azure, whereas a system-assigned identity is bound to the lifecycle of the service to which it is paired.
From the docs:
A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that's trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it's enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.
A user-assigned managed identity is created as a standalone Azure resource. Through a create process, Azure creates an identity in the Azure AD tenant that's trusted by the subscription in use. After the identity is created, the identity can be assigned to one or more Azure service instances. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure service instances to which it's assigned.
User assigned identities are still in preview for App Services. See the documentation here. It may still be in private preview (i.e. Microsoft has to explicitly enable it on your subscription), it may not be available in the region you have selected, or it could be a defect.
To use a user-assigned identity, the HTTP call to get a token must include the identity's id.
Otherwise it will attempt to use a system-assigned identity.
Why is it trying to connect to a localhost address? This seems wrong.
Because the MSI endpoint is local to App Service, only accessible from within the instance.
Could this be getting a 400 back because the resource parameter isn't escaped?
Yes, but I don't think that was the reason here.
In the MsiAccessTokenProvider source, it only uses that form of an address when the environment variables MSI_ENDPOINT and MSI_SECRET are set. They are not set in application settings, but I can see them in the debug console when I output environment variables.
These are added by App Service invisibly, not added to app settings.
As for how to use the user-assigned identity, I couldn't see a way to do that with the AppAuthentication library.
You could make the HTTP call manually in Azure: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http.
Then you gotta take care of caching yourself though!
Managed identity endpoints can't handle a lot of queries at one time :)
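A hedged sketch of that manual call from inside App Service, using the MSI_ENDPOINT/MSI_SECRET variables mentioned in the question. The clientid parameter for selecting a user-assigned identity follows the 2017-09-01 App Service MSI API; treat the parameter name and response shape as assumptions to verify against the linked docs:
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class MsiTokenClient
{
    private static readonly HttpClient http = new HttpClient();

    // Requests a Key Vault token for a specific user-assigned identity by
    // passing its client id. JSON parsing of the response is left out.
    public static async Task<string> GetTokenResponseAsync(string userAssignedClientId)
    {
        var endpoint = Environment.GetEnvironmentVariable("MSI_ENDPOINT");
        var secret = Environment.GetEnvironmentVariable("MSI_SECRET");

        var url = $"{endpoint}?resource={Uri.EscapeDataString("https://vault.azure.net")}" +
                  $"&api-version=2017-09-01&clientid={userAssignedClientId}";

        var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Add("Secret", secret); // App Service MSI auth header

        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(); // JSON with access_token
    }
}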