Expressing an exclusive dependency in Karaf - apache-karaf

Is there a way to express an exclusive feature dependency in Karaf? Suppose there are two features, A and B, each of which provides (among others) a service with the same interface X, but backed by a different implementation bundle.
When starting feature B, would it be possible to express in Karaf that feature A needs to be unloaded, or otherwise to warn the user that two services with the same interface are now active?

No, that kind of dependency has to be solved at the standard OSGi service level. For example, you might introduce an extra service property on the service provided by feature B (say, provider=featureB) and select that service with an LDAP filter such as (provider=featureB) when referencing it.

Related

Autofac multitenant - override tenant at runtime

I have a .NET Core web application that uses an Autofac multitenant container.
The tenant strategy resolves the tenant by looking at the path of the HTTP requests.
However, there is a specific functionality in which a tenant A needs to use configuration of another tenant B (a sub-tenant in this case); the problem is that it is not known until tenant A has already performed some logic to know which sub-tenant's configuration it needs to use.
Is there a way to obtain the service of another tenant at runtime?
I will try to clarify with an example:
What I have is more or less:
An HTTP request to GET my.host.net/A/rules
The tenant resolver is capable of identifying that the current tenant is A (it is in the path, just after the host name)
The tenant resolver gets the general rules from the database, and one of them indicates that the configuration of another tenant, B, should be used
From here on, I would like to use the services of tenant B.
What I have tried / thought about:
Save the multitenant container and use GetTenantScope to resolve the scope of tenant B in a class factory that resolves the services to use (see the sketch after this list). However, I don't know the implications in terms of memory usage and possible problems with mixing tenants.
Forget about multitenancy and just save configurations per tenant in a specific class.
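For reference, a minimal sketch of the first option (the factory class and its names are hypothetical; it assumes the MultitenantContainer built at startup is registered so it can be injected):

```csharp
using Autofac;
using Autofac.Multitenant;

// Hypothetical factory wrapping Autofac's MultitenantContainer.GetTenantScope.
public class CrossTenantServiceFactory
{
    private readonly MultitenantContainer _container;

    public CrossTenantServiceFactory(MultitenantContainer container)
    {
        _container = container;
    }

    public T ResolveForTenant<T>(object tenantId) where T : class
    {
        // GetTenantScope returns the tagged lifetime scope for the given tenant,
        // creating it on first use; resolutions made from it use that tenant's
        // registrations/overrides rather than the current request's tenant.
        ILifetimeScope tenantScope = _container.GetTenantScope(tenantId);
        return tenantScope.Resolve<T>();
    }
}
```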
I'm not sure what "sub-tenant" means in this context. Autofac has no notion of multi-level tenancy in its multitenant support, so while this construct may make sense in the context of your application, trying to make that work in Autofac is not going to be simple.
Trying to switch tenants mid-request is going to be a challenge at best. Things throughout the request pipeline (middleware, controllers, etc.) are all going to want to use HttpContext.RequestServices, which is set first thing in the request. Like, it's literally the very first middleware that runs. Once it's set, the pipeline starts resolving, controllers and other things start resolving, and... it's locked into that tenant. You can't switch it.
Given that, I'd caution you about trying to resolve some things from one tenant, switching mid-request, and resolving the rest from a different tenant. It's likely you'll get inconsistencies.
Say you have a middleware instance that takes an ISomeCoolService. You also have a controller that needs ISomeCoolService, but you're using the special tenant-switching logic in the controller instead of taking it as a dependency. During the middleware execution, the middleware will get tenant A's ISomeCoolService but the controller will use tenant B's ISomeCoolService and now you've got application behavior inconsistency. Trying to ensure consistency with the tenant switching is going to be really, really hard.
Here's what I'd recommend:
If you can do all the tenant determination up front in the ITenantIdentificationStrategy and cache that in, say, HttpContext.Items so you don't have to look it up again - do that. The very first hit in the pipeline might be slow with the initial tenant determination logic but after that the ITenantIdentificationStrategy can look in HttpContext.Items for the tenant ID instead of doing the database call and it'll be fast. This will stop you from having to switch tenants mid-request.
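For illustration, a minimal sketch of that caching approach, assuming a hypothetical path-based strategy (the class name and cache key are made up; the "slow path" is whatever database work your real resolver does):

```csharp
using System.Linq;
using Autofac.Multitenant;
using Microsoft.AspNetCore.Http;

// Hypothetical strategy that determines the tenant once per request and
// caches the result in HttpContext.Items for subsequent identifications.
public class CachingPathTenantIdentificationStrategy : ITenantIdentificationStrategy
{
    private const string TenantKey = "__TenantId";
    private readonly IHttpContextAccessor _accessor;

    public CachingPathTenantIdentificationStrategy(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }

    public bool TryIdentifyTenant(out object tenantId)
    {
        tenantId = null;
        var context = _accessor.HttpContext;
        if (context == null)
        {
            return false;
        }

        // Fast path: the tenant was already determined earlier in this request.
        if (context.Items.TryGetValue(TenantKey, out var cached))
        {
            tenantId = cached;
            return tenantId != null;
        }

        // Slow path: do the expensive determination once (parse the path,
        // hit the database, apply the rules) and cache the result.
        var first = context.Request.Path.Value?.Trim('/').Split('/').FirstOrDefault();
        tenantId = string.IsNullOrEmpty(first) ? null : first;
        context.Items[TenantKey] = tenantId;
        return tenantId != null;
    }
}
```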
If you can't do the tenant determination up front and you need the pipeline to execute a while before you can figure it out... you may need a different way to determine the tenant. Truly, try to avoid switching tenants. It will cause you subtle problems forever.
Don't try to get "tenant inheritance" working, at least not with the stock Autofac Multitenant support. I recognize that it would be nice to say "some services are tenant A but others are tenant B and it inherits down the stack" but that's not something built into the multitenant support and is going to be really hard to try to force to work.
If you really, really, really are just dedicated to getting this tenant "hierarchy" thing working, you could try forking the Autofac.Multitenant support and implementing a new MultitenantContainer that allows for sub-tenants. The logic of the MultitenantContainer isn't actually that complex, it's just storing a tagged lifetime scope per tenant. Hypothetically, you could add some functionality to enable sub-tenant configuration. It won't be five minutes of work, and it's not really something we'd plan on adding to Autofac, so it would be a total fork that you get to own, but you could possibly do it.

DiagnosticSource - DiagnosticListener - For frameworks only?

There are a few new classes in .NET Core for tracing and distributed tracing. See the markdown docs in here:
https://github.com/dotnet/corefx/tree/master/src/System.Diagnostics.DiagnosticSource/src
As application developers, should we be instrumenting events in our code, such as sales or inventory depletion, using DiagnosticListener instances, and then either subscribe and route messages to some metrics store or allow tools like Application Insights to automatically subscribe and push these events to the AI cloud?
OR
Should we create our own metrics-collecting abstraction and inject/flow it down the stack "as per normal" and pretend we never saw DiagnosticListener?
I have a similar need to publish "health events" to Service Fabric, which I could also solve (abstract) using DiagnosticListener instances sprinkled around.
DiagnosticListener is intended to decouple the library/app from the tracing system: i.e. any library can use DiagnosticSource to notify any consumer about interesting operations.
Tracing system can dynamically subscribe to such events and get extensive information about the operation.
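For example, here is a rough sketch of both sides of that contract (the listener name, event name, and payload are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Library/app side: emit events without referencing any tracing system.
public class InventoryService
{
    private static readonly DiagnosticSource Diagnostics =
        new DiagnosticListener("MyCompany.Inventory");

    public void Deplete(string sku, int quantity)
    {
        // IsEnabled lets consumers opt in; when nobody is listening,
        // the anonymous payload object is never even allocated.
        if (Diagnostics.IsEnabled("InventoryDepleted"))
        {
            Diagnostics.Write("InventoryDepleted", new { Sku = sku, Quantity = quantity });
        }
    }
}

// Tracing side: dynamically subscribe to the listeners you care about.
public class ConsoleExporter :
    IObserver<DiagnosticListener>, IObserver<KeyValuePair<string, object>>
{
    public void OnNext(DiagnosticListener listener)
    {
        if (listener.Name == "MyCompany.Inventory")
        {
            listener.Subscribe(this);
        }
    }

    public void OnNext(KeyValuePair<string, object> evt) =>
        Console.WriteLine($"{evt.Key}: {evt.Value}");

    public void OnCompleted() { }
    public void OnError(Exception error) { }
}

// Wire-up, e.g. at startup:
// DiagnosticListener.AllListeners.Subscribe(new ConsoleExporter());
```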
If you develop an application and use a tracing system that supports DiagnosticListener, e.g. Application Insights, you may either use DiagnosticListener to decouple your code from the tracing system or use its API directly. The latter is more efficient, as there is no extra adapter converting your DiagnosticSource events into AppInsights/other tracing systems' events, and you can fine-tune these events more easily.
The former is better if you actually want this layer of abstraction.
You can configure AI to use any DiagnosticListener (by specifying includedDiagnosticSourceActivities).
If you write a library and want to rely on something available on the platform so that any app can use it without bringing new extra dependencies - DiagnosticListener is your best choice.
Also consider that tracing and metrics collection are different: tracing is much heavier and does not assume any aggregation. If you just want custom metrics/events without in-/out-of-process correlation, I'd recommend using the tracing system's APIs directly.

Application Insights SDK - Map inter-resource dependencies

I'm trying to create an end-to-end Application Map using Application Insights. Note all dependencies and metrics are captured and sent using the SDK.
Take the following scenario:
Windows service (batch processing) > (calls) WebAPI > (queries db)
I have 2 Application Insights resources - Windows Service and WebAPI. Both are capturing metrics, but in isolation. How can I create a dependency using the SDK between resource 1 (i.e. the service) and resource 2 (i.e. the WebAPI)? I need to be able to view the Application Map for resource 1 and see the entire end-to-end view of windows service > web service > db.
I can currently see only windows service > WebApi (App Map resource 1) or WebApi > db (App Map resource 2). I need to bring both together somehow.
The Application Insights SDK only collects dependencies automatically for HTTP dependencies, and only when the Application Insights profiler is running on the machine (often installed on Azure websites through the Application Insights Extension).
If you happen to be in one of the situations where the new beta SDK is not collecting dependencies for you, you can do it on your own by writing a little bit of code yourself.
The SDK's autocollection code is open source, and you can use it as a guide for how to track these dependencies. The idea is to append the hash of the target component's instrumentation key to the dependency telemetry's target field, and to set the dependency type to "Application Insights".
Here is how to compute the hash: Compute Hash
Here is how to add it to the target field and set the right dependency type on the dependency telemetry object: Add component correlation to DependencyTelemetryTarget
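Putting those two pieces together, here is a hedged sketch of what the manual tracking could look like. The hashing helper is assumed to mirror the SDK code linked above (SHA-256, Base64), and the exact target format follows the description here, which, per the caution below, may have since changed:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class CrossComponentDependencyTracker
{
    public static void TrackCallToComponent(
        TelemetryClient client,
        string targetHost,               // e.g. "myapi.azurewebsites.net"
        string targetInstrumentationKey, // the ikey of the *called* component
        string operationName,
        DateTimeOffset startTime,
        TimeSpan duration,
        bool success)
    {
        var telemetry = new DependencyTelemetry
        {
            Name = operationName,
            // Append the hash of the target component's instrumentation key
            // to the target field so the Application Map can link resources.
            Target = targetHost + " | " + ComputeIkeyHash(targetInstrumentationKey),
            Type = "Application Insights",
            Timestamp = startTime,
            Duration = duration,
            Success = success
        };
        client.TrackDependency(telemetry);
    }

    // Assumed to match the SDK's hashing helper referenced above.
    private static string ComputeIkeyHash(string instrumentationKey)
    {
        using (var sha256 = SHA256.Create())
        {
            var bytes = sha256.ComputeHash(Encoding.UTF8.GetBytes(instrumentationKey));
            return Convert.ToBase64String(bytes);
        }
    }
}
```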
A little word of caution. There may soon be a change to the format in which the target field is captured / the name of the dependency type (see this discussion). If and when that happens, it would be an easy enough change for you too.
My recommendation would be to use the same Application Insights resource (i.e. the same instrumentation key) for both your Windows service and Web API.
You can separate the telemetry for the two services by adding a custom property indicating the service to all telemetry you emit. The easiest way to do this is to implement a telemetry initializer (see here for documentation).
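A minimal sketch of such an initializer (the property name and registration shown are hypothetical choices):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Stamps every telemetry item with the name of the emitting service so a
// shared Application Insights resource can still be filtered per service.
public class ServiceNameTelemetryInitializer : ITelemetryInitializer
{
    private readonly string _serviceName;

    public ServiceNameTelemetryInitializer(string serviceName)
    {
        _serviceName = serviceName;
    }

    public void Initialize(ITelemetry telemetry)
    {
        var withProperties = telemetry as ISupportProperties;
        if (withProperties != null && !withProperties.Properties.ContainsKey("ServiceName"))
        {
            withProperties.Properties["ServiceName"] = _serviceName;
        }
    }
}

// Registration, e.g. at startup:
// TelemetryConfiguration.Active.TelemetryInitializers.Add(
//     new ServiceNameTelemetryInitializer("WindowsService"));
```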
It is not possible today. Possible ways:
Use a single InstrumentationKey and identify by a custom property (as suggested by @EranG)
Export the data for both apps and do your own thing
Please vote on this UserVoice; the product team is already considering implementing this functionality in the future.

PACT: How to guard against consumer generating incorrect contracts

We have two microservices, Provider and Consumer, which are built independently. Suppose the Consumer microservice makes a mistake in how it consumes the Provider service (for whatever reason) and, as a result, an incorrect pact is published to the Pact Broker.
The Consumer service build succeeds (and can go all the way to release!), but the next Provider service build will fail for the wrong reason. So we end up with a broken Provider build and a broken release of the Consumer.
What is the best practice to guard against situations like this?
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Thanks!
This is the nature of consumer-driven contracts - the consumer gets a significant say in the API!
As a general rule, if the contract doesn't change, there is no need to run the Provider build, although there is currently no easy way to know this in the Broker (see feature request https://github.com/bethesque/pact_broker/issues/48).
As for solutions, you could use one or more of the strategies below.
Effective use of code branches
It is of course very important that new assumptions on the contract be validated by the Provider before the Consumer can be safely released. Have branches tested against the Provider before you merge into master.
But most importantly - you must be collaborating closely with the Provider team!
Use source control to detect a modified contract:
If you also check the master pact files into source control, your CI build can act conditionally: if the contract has changed, you must wait for a green Provider build; if not, you can safely deploy!
Store in separate repository
If you really want the provider to maintain control, you could store contracts in an intermediate repository or file location managed by the provider. I'd recommend this only as a last resort, as it negates much of the collaboration Pact intends to facilitate.
Use Pact Broker Webhooks:
I was hoping that Pact Broker can trigger the Provider tests automatically when contracts are published and notify Consumers if they fail, but it doesn't seem to be the case.
Yes, this is possible using web hooks on the Pact Broker. You could trigger a build on the Provider as soon as a new contract is submitted to the server.
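For illustration, a webhook definition roughly like the following could be POSTed to the broker's webhooks resource (the CI trigger URL is a made-up placeholder; ${pactbroker.pactUrl} is one of the broker's template parameters):

```json
{
  "description": "Trigger provider verification when a contract changes",
  "events": [
    { "name": "contract_content_changed" }
  ],
  "request": {
    "method": "POST",
    "url": "https://ci.example.com/job/provider-verify/build",
    "headers": { "Content-Type": "application/json" },
    "body": { "pactUrl": "${pactbroker.pactUrl}" }
  }
}
```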
You could envisage this step working with options 1 and 2.
See Using Pact where the Consumer team is different from the Provider team in our FAQ for more on this use case.
You're spot on; that is one of the things currently lacking in the Pact workflow, and it's something I've been meaning to work towards once a few other things align.
That being said, it doesn't solve your current problem in the meantime, so I'm going to suggest a potential workaround in your process. Instead of running the consumer tests, having them pass, and releasing straight away, you could run the tests on the consumer and then wait for the provider tests to come back green before releasing the consumer/provider together. Another way would be to version your provider/consumer interactions (API versioning) so that you can release the consumer beforehand, but it isn't "turned on" until the correct version of the provider is released.
None of these solutions are great and I wholeheartedly agree. This is something that I'm quite passionate about and will be working on soon to fix the developer experience with pact broker and releasing the consumer/provider in a better fashion.
Any and all comments are welcome. Cheers.
I think the problem might be caused by the fact that contracts are generated on the consumer side. It means that consumers can modify those contracts however they want, but in the end the producer's build will suffer due to incorrect contracts generated by consumers.
Is there any way for contracts to be defined by the producer? I think the producer is responsible for maintaining its own contracts. For instance, in the case of Spring Cloud Contract it is recommended to keep contracts in the producer's sources (e.g. in the same Git repo as the producer's source code) or in a separate SCM repo that can be managed by the producer and consumer together.

Should the services in my service layer live in separate projects/DLLs/assemblies?

I have an ASP.NET MVC 3 application that I am currently working on. I am implementing a service layer, which contains the business logic and is utilized by the controllers. The services themselves utilize repositories for data access, and the repositories use Entity Framework to talk to the database.
So top to bottom is: Controller > Service Layer > Repository (each service layer depends on a single injectable repository) > Entity Framework > Single Database.
I am finding myself making items such as UserService, EventService, PaymentService, etc.
In the service layer, I'll have functions such as:
ChargePaymentCard(int cardId, decimal amount) (part of PaymentService)
ActivateEvent(int eventId) (part of EventService)
SendValidationEmail(int userId) (part of UserService)
Also, as an example of a second place I am using this, I have another simple console application that runs as a scheduled task, which utilizes one of these services. There is also an upcoming second web application that will need to use multiple of these services.
Further, I would like to keep us open to splitting things up (such as our single database) and to moving to a service-oriented architecture down the road, breaking some of these out into web services (conceivably even to be used by non-.NET apps some day). I've been keeping my eyes open for steps that might make the leap to SOA less painful.
I have started down the path of creating a separate assembly (DLL) for each service, but am wondering if I have started down the wrong path. I'm trying to be flexible and keep things loosely coupled, but is this path helping me (towards SOA or in general), or just adding complexity? Should I instead create a single assembly/DLL containing my entire service layer, and use that single assembly wherever any services are needed?
I'm not sure the implications of the path(s) I'm starting down, so any help on this would be appreciated!
IMO, the answer depends on a lot of factors of your application.
Assuming that you are building a non-trivial application (i.e. it is not a college/hobby project to learn SOA):
User Service / Event Service / Payment Service
-- Create its own DLL & expose it as a WCF service if more than one application uses the service and it is too risky to share the DLL across the different applications
-- These services should not have inter-dependencies on each other & should each focus on its own area
-- Note: these services might share some common services like logging, authentication, data access etc.
Create a Composition Service
-- This service will do the composition of calls across all the other services
-- For example: if an Order is placed & the business flow is Order Placed > Confirm User Exists (User Service) > Raise an OrderPlaced event (Event Service) > Confirm Payment (Payment Service)
-- All such compositions of service calls can be handled in this layer
-- Again, depending on the environment, you might choose to expose this service as its own DLL and/or expose it as a WCF service
-- Note: this is the only service which will hold references to the other services & will be the single point of composition
Now, with this layout, you can call a service directly if you want to interact with that service alone, and you call the composition service when a business workflow requires composing multiple services to complete the transaction.
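To make that concrete, here is a minimal sketch of the composition service for the order flow above (all interface and method names are hypothetical stand-ins for your own contracts):

```csharp
// Contracts owned by the individual services.
public interface IUserService { bool UserExists(int userId); }
public interface IEventService { void RaiseOrderPlaced(int orderId); }
public interface IPaymentService { bool ConfirmPayment(int orderId, decimal amount); }

// The single point of composition: the only class that references the
// other services and encodes the cross-service business flow.
public class OrderCompositionService
{
    private readonly IUserService _users;
    private readonly IEventService _events;
    private readonly IPaymentService _payments;

    public OrderCompositionService(
        IUserService users, IEventService events, IPaymentService payments)
    {
        _users = users;
        _events = events;
        _payments = payments;
    }

    public bool PlaceOrder(int userId, int orderId, decimal amount)
    {
        // Order Placed > Confirm User Exists > Raise OrderPlaced > Confirm Payment
        if (!_users.UserExists(userId))
        {
            return false;
        }

        _events.RaiseOrderPlaced(orderId);
        return _payments.ConfirmPayment(orderId, amount);
    }
}
```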
As a starting point, I would recommend that you go through any of the books on SOA architecture - it will help clear a lot of concepts.
I tried to be as short as possible to keep this answer meaningful; there are tons of ways of doing the same thing, and this is just one of the possible ways.
HTH.
Having one DLL per service sounds like a bad idea. According to Microsoft, you'd want to favor one large assembly over multiple smaller ones due to the performance implications (from here via this post).
I would split your base or core services into a separate project and keep most (if not all) services in it. Depending on your needs you may have services that only make sense in the context of a web project or a console app and not anywhere else. Those services should not be part of the "core" service layer and should reside in their appropriate projects.
It is better to separate the services from the consumers. In our projects we have two levels of separation: we group all the service interfaces into one Visual Studio project, and all the service implementations into another.
The consumer of the services needs to reference two DLLs, but this makes the solution more maintainable and scalable, and we can have multiple implementations of the services.
For example, the interfaces project can define a contract for WebSearch, and there can be multiple implementations of WebSearch through different search service providers like Google Search, Bing Search, Yahoo Search, etc.
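As a rough sketch of that layout (all names here are hypothetical), the interfaces project holds only contracts and the implementations live in a separate project:

```csharp
using System.Collections.Generic;

// Interfaces project: only contracts, no implementation dependencies.
public interface IWebSearchService
{
    IReadOnlyList<string> Search(string query);
}

// Implementations project: one class per search provider.
public class BingWebSearchService : IWebSearchService
{
    public IReadOnlyList<string> Search(string query)
    {
        // Call the Bing API here; stubbed for the sketch.
        return new List<string> { "bing result for: " + query };
    }
}

public class GoogleWebSearchService : IWebSearchService
{
    public IReadOnlyList<string> Search(string query)
    {
        // Call the Google API here; stubbed for the sketch.
        return new List<string> { "google result for: " + query };
    }
}
```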
