We are trying to implement Azure Service Bus for managing user "Work Queues".
Background:
We have a web UI pushing new items to a Web API, which are persisted to a DB and then pushed to a Service Bus queue. All messages have a property that denotes who can work on the message. To give a user the ability to pick up the messages assigned to them, I am thinking of creating a topic with subscriptions that filter on that property.
Approach
To achieve the above mentioned capability:
I need to register a sender for the queue and a sender for the topic, all within the same Web API. I have tried adding the two senders as singletons, but during DI, how do I pick which sender to use?
services.TryAddSingleton(implementationFactory =>
{
    var serviceBusConfiguration = implementationFactory.GetRequiredService<IMessagingServiceConfiguration>();
    var serviceBusClient = new ServiceBusClient(serviceBusConfiguration.IntakeQueueSendConnectionString);
    var serviceBusSender = serviceBusClient.CreateSender(serviceBusConfiguration.IntakeQueueName);
    return serviceBusSender;
});

services.TryAddSingleton(implementationFactory =>
{
    var serviceBusConfiguration = implementationFactory.GetRequiredService<IMessagingServiceConfiguration>();
    var serviceBusClient = new ServiceBusClient(serviceBusConfiguration.TopicConnectionString);
    var topicSender = serviceBusClient.CreateSender(serviceBusConfiguration.TopicName);
    return topicSender;
});
I am using the above setup to add the services as singletons, and individually I am able to send and receive messages from either the topic or the queue.
How can I register both implementations and pick which one should be injected when I consume it via constructor injection?
With respect to resolving multiple DI registrations of the same type, the first answer to this question illustrates using a service resolver with ASP.NET Core. To my knowledge, that is still the best approach.
For the senders, you could differentiate by checking their EntityPath property to identify whether they point to your queue or your topic.
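One catch in the snippet above: TryAddSingleton only registers a factory when the service type (here, ServiceBusSender) has no registration yet, so the second TryAddSingleton call is silently a no-op. Below is a minimal sketch of that resolver pattern, assuming both senders are registered with plain AddSingleton and reusing the IMessagingServiceConfiguration from the question; the SenderResolver delegate name is illustrative, not an established API:
// Illustrative resolver delegate: picks a registered sender by its entity path.
public delegate ServiceBusSender SenderResolver(string entityPath);

// Register both senders with AddSingleton so both registrations survive.
services.AddSingleton(sp =>
{
    var config = sp.GetRequiredService<IMessagingServiceConfiguration>();
    return new ServiceBusClient(config.IntakeQueueSendConnectionString)
        .CreateSender(config.IntakeQueueName);
});
services.AddSingleton(sp =>
{
    var config = sp.GetRequiredService<IMessagingServiceConfiguration>();
    return new ServiceBusClient(config.TopicConnectionString)
        .CreateSender(config.TopicName);
});

// The resolver walks all ServiceBusSender registrations and matches on EntityPath.
services.AddSingleton<SenderResolver>(sp => entityPath =>
    sp.GetServices<ServiceBusSender>().First(s => s.EntityPath == entityPath));
A consumer then injects SenderResolver and calls it with the queue or topic name it needs, instead of depending on ServiceBusSender directly.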
Related
In case of a Web API, each request is a distinct scope and dependencies registered as scoped will get resolved per request. So resolving dependencies per request per tenant is easy as the tenant information (like TenantId) can be passed in the HTTP Request headers like below:
services.TryAddScoped<ITenantContext>(x =>
{
    var context = x.GetService<IHttpContextAccessor>().HttpContext;
    var tenantId = context.Request.Headers["TenantId"].ToString();
    var tenantContext = GetTenantContext(tenantId);
    return tenantContext;
});
Other registrations first resolve the TenantContext and use it to resolve further dependencies. For example, IDatabase is registered as below; during resolution it resolves the tenant context and connects to that specific tenant's database.
services.TryAddScoped<IDatabase>(x =>
{
    var tenantContext = x.GetService<ITenantContext>();
    return new Database(tenantContext.DatabaseConnectionString);
});
This is all good in a Web API service because each request is a scope. I am facing challenges using dependency injection in a multi-tenant console app. Suppose the app processes items from a multi-tenant queue and each message can belong to a different tenant. While processing each message, it commits data to a tenant-specific database. So in this case the scope is each message in the queue, and the message contains the tenantId.
So when the app reads a message from queue, it needs to get TenantContext. Then resolve other dependencies based on this TenantContext.
One straightforward option I see for achieving this dynamic resolution is to create the dependent objects manually using the TenantContext, but then I wouldn't be able to leverage dependency injection: all objects would be created manually and disposed after going out of scope once the message is processed.
var message = GetMessageFromQueue(queueName);
var tenantContext = GetTenantContext(message.TenantId);
var database = GetDatabaseObject(tenantContext);
// Do other processing now we got the database object connected to specific tenant DB
Is there an option in DI where I can pass in the TenantId dynamically so that TenantContext gets set for this scope and then all further resolution within this scope leverage this TenantContext?
Because the role of tenancy goes beyond the implementation ("this uses X database") and is actually contextual to the action being performed ("this uses X database and must use this connection string based on the context being handled in the action"), there is a risk that alternate implementations assume an ambient context that is never expressly described by your interface. That is where the DI issue here comes from.
You might be able to:
Update your interfaces so that the tenancy information is an expected parameter of your methods. This ensures that, regardless of future implementation, the presence of the tenant ID is explicit in the signature:
public interface ITenantDatabase {
    TResponse Get<TResponse>(string tenantId, int id);
    // ... other methods ...
}
Add a factory wrapper around your existing interfaces to handle assigning the context at object creation, and have that factory return the IDatabase instance. This is basically what you are proposing manually, but with an abstraction around it that you can register and inject, keeping the consuming code free of that logic:
public interface ITenantDatabaseFactory {
    IDatabase GetDatabaseForTenant(int tenantId);
}
// Add an implementation that manually generates and returns the scoped objects
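For the console-app case specifically, the factory option combines well with a DI scope per message: create a scope when a message arrives, seed it with that message's tenant, and let all scoped registrations resolve against it. A minimal sketch using Microsoft.Extensions.DependencyInjection, where CurrentTenantHolder is an illustrative name, not from the original post:
// A scoped, mutable holder that is seeded once per message scope.
public class CurrentTenantHolder
{
    public ITenantContext Context { get; set; }
}

// Registrations: ITenantContext and IDatabase resolve through the holder.
services.AddScoped<CurrentTenantHolder>();
services.AddScoped<ITenantContext>(sp => sp.GetRequiredService<CurrentTenantHolder>().Context);
services.AddScoped<IDatabase>(sp =>
    new Database(sp.GetRequiredService<ITenantContext>().DatabaseConnectionString));

// Processing loop: one scope per queue message.
var message = GetMessageFromQueue(queueName);
using (var scope = serviceProvider.CreateScope())
{
    // Seed the scope with this message's tenant before resolving anything else.
    scope.ServiceProvider.GetRequiredService<CurrentTenantHolder>().Context =
        GetTenantContext(message.TenantId);

    var database = scope.ServiceProvider.GetRequiredService<IDatabase>();
    // Process the message against the tenant-specific database...
}
Scoped objects created inside the scope are disposed when the scope is, which also addresses the manual-disposal concern from the question.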
In our system, we have one C++ component acting as a Thrift server and one .NET Core/C# component as a client.
So far I have been managing a single connection, using a singleton to create my ThriftPushClientWrapper, which implements TBaseClient (via the generated object from the Thrift interface):
.AddSingleton<IThriftPushClientWrapper>(sp =>
{
    var localIpAddress = IPAddress.Parse(serverIp);
    var transport = new TSocketTransport(localIpAddress, dataPort);
    var protocol = new TBinaryProtocol(transport);
    return new ThriftPushClientWrapper(protocol);
});
(So far I am using version 0.13 of the Thrift library and need to update to 0.14.1 soon, but I wonder if the server part must be updated too/first.)
This is working great.
Now I want multiple clients that can connect to the server simultaneously, all on the same ip:port.
So I am starting a ClientFactory, but wonder how to deal with the creation of the clients.
To be more precise, the server part is configured for 5 threads, so I need 5 clients.
One simple approach would be to create a new client each time, but that is probably inefficient.
A better approach is to keep a collection of 5 clients and use the next available free one.
So I started with the following factory, where the index is supplied from outside.
private readonly ConcurrentDictionary<int, IThriftPushClientWrapper> _clientDict;

public IThriftPushClientWrapper GetNextAvailablePushClient(int index)
{
    // Return the cached client for this index if one exists.
    if (_clientDict.TryGetValue(index, out var client) && client != null)
        return client;

    // Otherwise create and cache a new client for the expected index.
    client = CreateNewPushClient();
    _clientDict.TryAdd(index, client);
    return client;
}

private IThriftPushClientWrapper CreateNewPushClient()
{
    var localIpAddress = IPAddress.Parse(serverIp);
    var transport = new TSocketTransport(localIpAddress, dataPort);
    var protocol = new TBinaryProtocol(transport);
    return new ThriftPushClientWrapper(protocol);
}
My next issue is determining how to set the index from outside.
I started with a SemaphoreSlim(5, 5), using semaphore.CurrentCount as the index, but that is probably not the best idea. I also tried a rolling index from 0 to 5. But apparently a CancellationToken is used to cancel further processing; I am not sure of the root cause yet.
Is it possible to determine whether a TBaseClient is currently busy or available?
What is the recommended strategy to deal with a pool of clients?
The easiest solution is to do it right: if you are going to use a resource from a pool of resources, either take it off the pool or mark it as used in some suitable way for that time.
It's notable that the question has nothing to do with Thrift in particular. You are trying to patch a weak resource-management approach by leveraging other people's code that was never intended to work in such a context.
Regarding how to implement object pooling, this other question can provide further advice. Also keep in mind that, especially on Windows platforms, not all system resources can be shared freely across threads.
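As a concrete illustration of "take it off the pool", here is a minimal sketch of a blocking pool built on BlockingCollection. The wrapper type follows the question, but the pool itself is illustrative and not Thrift-specific:
using System;
using System.Collections.Concurrent;

public sealed class PushClientPool : IDisposable
{
    private readonly BlockingCollection<IThriftPushClientWrapper> _idle;

    public PushClientPool(int size, Func<IThriftPushClientWrapper> factory)
    {
        _idle = new BlockingCollection<IThriftPushClientWrapper>(size);
        for (var i = 0; i < size; i++)
            _idle.Add(factory()); // e.g. CreateNewPushClient from the question
    }

    // Blocks until a client is free, so callers never need to ask "is it busy?".
    public IThriftPushClientWrapper Rent() => _idle.Take();

    // Hand the client back once the call has completed.
    public void Return(IThriftPushClientWrapper client) => _idle.Add(client);

    public void Dispose() => _idle.Dispose();
}
Callers wrap usage in try/finally (var c = pool.Rent(); try { ... } finally { pool.Return(c); }), which replaces both the index bookkeeping and the SemaphoreSlim.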
I am trying to read all existing messages on an Azure Service Bus subscription, using Microsoft.Azure.ServiceBus.dll (in .NET Core 2.1), but am struggling.
I've found many examples suggesting that the following should work, but it doesn't:
var subscriptionClient = new SubscriptionClient(ServiceBusConnectionString, topicName, subscription, ReceiveMode.PeekLock, null);
var totalRetrieved = 0;
while (totalRetrieved < count)
{
    var messageEnumerable = subscriptionClient.PeekBatch(count);
    // ... code removed from this example as not relevant
}
My issue is that the .PeekBatch method isn't available, and I'm confused as to how I need to approach this.
I've downloaded the source for ServiceBusExplorer from GitHub (https://github.com/paolosalvatori/ServiceBusExplorer), and the above code example is pretty much how it does it, but not in .NET Core / the Microsoft.Azure.ServiceBus namespace.
For clarity though, I'm trying to read messages that are already on the subscription. I've worked through other examples that create listeners that respond to new messages, but I need to work in this disconnected manner, after the message has already been placed there.
ServiceBusExplorer uses the WindowsAzure.ServiceBus library, which is a .NET Framework library that you cannot use in .NET Core applications. You should use Microsoft.Azure.ServiceBus (a .NET Standard library) in .NET Core applications.
Check here for samples of Microsoft.Azure.ServiceBus
var subscriptionClient = new SubscriptionClient(ServiceBusConnectionString, topicName, subscription, ReceiveMode.PeekLock, null);

subscriptionClient.RegisterMessageHandler(
    async (message, token) =>
    {
        // Process the message, then complete it to remove it from the subscription.
        await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
    },
    new MessageHandlerOptions(args => Task.CompletedTask)); // an exception handler (or options) argument is required
Try using RegisterMessageHandler. It receives messages continuously from the entity: it registers a message handler and begins a new thread to receive messages, and the handler is awaited every time a new message is received by the receiver.
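If you specifically need to read messages that are already sitting on the subscription (the PeekBatch equivalent), the Microsoft.Azure.ServiceBus package also exposes a lower-level MessageReceiver with PeekAsync. A minimal sketch, reusing the variables from your snippet:
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

// Peek does not lock or remove messages; it just reads what is on the subscription.
var subscriptionPath = EntityNameHelper.FormatSubscriptionPath(topicName, subscription);
var receiver = new MessageReceiver(ServiceBusConnectionString, subscriptionPath, ReceiveMode.PeekLock);

var messages = await receiver.PeekAsync(count);
foreach (var message in messages)
{
    // Inspect message.Body / message.UserProperties as needed.
}

await receiver.CloseAsync();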
I've been looking for a good way to do this, but haven't found anything that doesn't seem hacky. I want to signal the client without going through the database and a subscription. For example, in a game I want to send a message to the client to display "Player 1 almost scores!". I don't care about this information in the long run, so I don't want to push it to the DB. I guess I could just set up another socket.io, but I'd rather not have to manage a second connection if there is a good way to do it within Meteor. Thanks! (BTW, I have looked at Meteor Streams, but it appears to have gone inactive.)
You know that Meteor provides real-time communication from the server to clients through its Publish and Subscribe mechanism, which is typically used to send your MongoDB data and later modifications.
You would like a similar push system, but without having to record data into your MongoDB.
It is totally possible to re-use the Meteor Pub/Sub system without the database part: while with Meteor.publish you typically return a Collection Cursor (hence data from your DB), you can also use its low-level API to send arbitrary real-time information:
Alternatively, a publish function can directly control its published record set by calling the functions added (to add a new document to the published record set), changed (to change or clear some fields on a document already in the published record set), and removed (to remove documents from the published record set). […]
Simply do not return anything, use the above-mentioned methods, and do not forget to call this.ready() at the end of your publish function.
See also the Guide about Custom publications
// SERVER
const customCollectionName = 'collection-name';
let sender; // <== we will keep a reference to the publisher
Meteor.publish('custom-publication', function() {
sender = this;
this.ready();
this.onStop(() => {
// Called when a Client stops its Subscription
});
});
// Later on…
// ==> Send a "new document" as a new signal message
sender.added(customCollectionName, 'someId', {
// "new document"
field: 'values2'
});
// CLIENT
const signalsCollectionName = 'collection-name'; // Must match what is used in Server
const Signals = new Mongo.Collection(signalsCollectionName);
Meteor.subscribe('custom-publication'); // As usual, must match what is used in Server
// Then use the Collection low-level API
// to listen to changes and act accordingly
// https://docs.meteor.com/api/collections.html#Mongo-Cursor-observe
const allSignalsCursor = Signals.find();
allSignalsCursor.observe({
added: (newDocument) => {
// Do your stuff with the received document.
}
});
Then how and when you use sender.added() is totally up to you.
Note: keep in mind that it will send data individually to a Client (each Client has their own Server session)
If you want to broadcast messages to several Clients simultaneously, then the easiest way is to use your MongoDB as the glue between your Server sessions. If you do not care about actual persistence, then simply re-use the same document over and over and listen to changes instead of additions in your Client Collection Cursor observer.
It's completely fine to use the database for such a task.
Maybe create a collection of "Streams" where you store the intended receiver and the message; the client subscribes to his stream and watches for any changes on it.
You can then delete the stream from the database after the client is done with it.
This is a lot easier than reinventing the wheel and writing everything from scratch.
Situation
We have one ASP.NET MVC 5 application running along with SQL Server. We have one master database containing a table Tenants, where all of our tenants are registered with a connection string property pointing to their own personal database.
For authentication we are using the Microsoft Owin library.
Autofac
We have setup autofac like this:
var builder = new ContainerBuilder();
// Register the controllers
builder.RegisterControllers(typeof(Project.Web.ProjectApplication).Assembly);
// ### Register all persistence objects
// Project main database registration ( Peta Poco instance using connectionstring as parameter )
builder.RegisterType<ProjectDatabase>()
.As<ProjectDatabase>()
.WithParameter(new NamedParameter("connectionString", GlobalSettings.ProjectTenantConnectionString))
.InstancePerLifetimeScope();
// Project tenant specific database registration
// ...
// Unit of work
builder.RegisterType<PetaPocoUnitOfWork>()
.As<IDatabaseUnitOfWork>()
.InstancePerRequest();
// ### Register all services
builder.RegisterAssemblyTypes(Assembly.Load("Project.Core"))
.Where(t => t.Name.EndsWith("Service"))
.AsImplementedInterfaces()
.InstancePerLifetimeScope();
// ### Register all repositories
builder.RegisterType<RepositoryFactory>()
.As<IRepositoryFactory>()
.InstancePerLifetimeScope();
builder.RegisterAssemblyTypes(Assembly.Load("Project.Core"))
.Where(t => t.Name.EndsWith("Repository"))
.AsImplementedInterfaces()
.InstancePerLifetimeScope();
// Register Logging
builder.RegisterType<Logger>().As<ILogger>().InstancePerLifetimeScope();
// Register Automapper
builder.RegisterAssemblyTypes(Assembly.Load("Project.Core")).As<Profile>();
builder.RegisterAssemblyTypes(Assembly.Load("Project.Web")).As<Profile>();
builder.Register(context => new MapperConfiguration(cfg =>
{
    foreach (var profile in context.Resolve<IEnumerable<Profile>>())
    {
        cfg.AddProfile(profile);
    }
})).AsSelf().SingleInstance();
builder.Register(c => c.Resolve<MapperConfiguration>().CreateMapper(c.Resolve))
.As<AutoMapper.IMapper>()
.InstancePerLifetimeScope();
// Register Owin
builder.Register(ctx => HttpContext.Current.GetOwinContext()).As<IOwinContext>();
builder.Register(
c => new IdentityUserStore(c.Resolve<IUserService>()))
.AsImplementedInterfaces().InstancePerRequest();
builder.Register(
ctx => ctx.Resolve<IOwinContext>().Authentication)
.As<IAuthenticationManager>().InstancePerRequest();
builder.RegisterType<IdentityUserManager>().AsSelf().InstancePerRequest();
// Build container
var container = builder.Build();
// Tenant container
var tenantIdentifier = new RequestSubdomainStrategy();
var mtc = new MultitenantContainer(tenantIdentifier, container);
// Set autofac as dependency resolver
DependencyResolver.SetResolver(new AutofacDependencyResolver(mtc));
More details
Using this setup, we have an instance registered in Autofac for our master tenant database.
This is then injected into our PetaPocoUnitOfWork for committing the transaction.
This works, and I can get the tenant information.
But now we need the following to work and we don't have a clue where to start.
How do we set up Autofac to register tenant-specific PetaPoco database instances to inject into the PetaPocoUnitOfWork, and how will the app know how to resolve them? We need access to two databases (the master database and the personal tenant database): first to get the tenant's connection string, and then to do CRUD operations on the tenant's database.
What about our PetaPocoUnitOfWork, which contains the database to work with? Should we also register it per tenant, pass the database in via Autofac's resolve mechanism, and set it up as instance-per-request?
You can actually have a shard manager [similar to the Microsoft Azure Shard Manager] that takes the connection string name and the tenant context. From this information it can resolve the connection and then pass it on to the context.
This is resolved on a per-tenant basis, and the application then works with the tenant-based connection; it is injected into each of the services so that the established identity [the logged-in user's identity] can be used to set the right connection object in the EF / data tier. This facilitates a loosely coupled design that is also easy to test and mock.
You can find sample code and a little documentation of how such an implementation would look in my GitHub repository.
IMHO, the rationale behind the approach I suggest is that the per-tenant partitions would be stored in a database [typically your master database] and would need to be fetched and used even if you are somehow able to inject these via Autofac. I did not reproduce the code here, as it would take a fairly long explanation; that is taken care of on GitHub.
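That said, if you do want the tenant databases wired directly into Autofac, Autofac.Multitenant lets you add per-tenant registrations on the MultitenantContainer you already build. A minimal sketch, assuming the tenant list (with connection strings) is first loaded from the master Tenants table; GetAllTenants is a hypothetical lookup, and tenant.Id is assumed to match what RequestSubdomainStrategy returns:
// After building 'container' and 'mtc' as in the question:
foreach (var tenant in GetAllTenants()) // hypothetical lookup against the master DB
{
    mtc.ConfigureTenant(tenant.Id, b =>
    {
        // Each tenant gets its own ProjectDatabase bound to its connection string,
        // overriding the master registration within that tenant's scope.
        b.RegisterType<ProjectDatabase>()
         .AsSelf()
         .WithParameter(new NamedParameter("connectionString", tenant.ConnectionString))
         .InstancePerLifetimeScope();
    });
}
The tenant identification strategy then picks the tenant per request, so the PetaPocoUnitOfWork can simply take a ProjectDatabase dependency and receive the right one. Note that Autofac.Multitenant only allows each tenant to be configured once.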
HTH