Verify CORS origin in .NET 6 when DI is needed - request-pipeline

Our service allows customers to use their own domain to access the service, after setting the proper DNS records. Their domain/subdomain is pointed to our front-end server, which makes the experience more personalized for their users.
Since any customer can change their domain at any time, or even add new ones, we need to make a database call to verify the Origin header.
I've been trying to simply NOT handle CORS as usual (app.UseCors() etc.), but instead use filters to resolve the "header situation". I gave up on this approach because I realized that it just won't work (there is no endpoint that accepts the preflight OPTIONS request, so the filters never run). It also feels dirty, even if it would have worked.
I tried implementing ICorsPolicyProvider with a custom CORS attribute, but this seems to be a dead end since I can't find a way to inject the services I need to verify the origin.
Is there a way to verify the Origin header while having access to the needed services?

The solution is to implement ICorsService. The default implementation (CorsService) is easy to understand and is a good starting point.
In your services configuration, replace the call
services.AddCors();
with:
services.AddTransient<ICorsService, YourImplementationOfICorsService>()
.AddTransient<ICorsPolicyProvider, DefaultCorsPolicyProvider>();
Everything else is the same. The only drawback of this solution is that no scoped dependencies may be injected into your ICorsService implementation, because the CORS middleware takes ICorsService as a constructor dependency and therefore resolves it from the root provider.
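For illustration, here is a minimal sketch of what such an implementation could look like, assuming a hypothetical ICustomerOriginStore service that does the database lookup (the names are illustrative, not part of the original answer). It reuses the built-in CorsService for the actual header handling and only swaps in a per-request origin check:

using Microsoft.AspNetCore.Cors.Infrastructure;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

// Hypothetical abstraction over the database lookup of customer domains.
// Must not be scoped, for the reason explained above.
public interface ICustomerOriginStore
{
    bool IsKnownOrigin(string origin);
}

public class DbCorsService : ICorsService
{
    private readonly CorsService _inner;
    private readonly ICustomerOriginStore _origins;

    public DbCorsService(IOptions<CorsOptions> options,
                         ILoggerFactory loggerFactory,
                         ICustomerOriginStore origins)
    {
        // Reuse the default implementation for evaluating and applying headers.
        _inner = new CorsService(options, loggerFactory);
        _origins = origins;
    }

    public CorsResult EvaluatePolicy(HttpContext context, CorsPolicy policy)
    {
        // Build a per-request policy whose allowed origins come from the database.
        var dbPolicy = new CorsPolicyBuilder()
            .SetIsOriginAllowed(origin => _origins.IsKnownOrigin(origin))
            .AllowAnyHeader()
            .AllowAnyMethod()
            .Build();

        return _inner.EvaluatePolicy(context, dbPolicy);
    }

    public void ApplyResult(CorsResult result, HttpResponse response)
        => _inner.ApplyResult(result, response);
}

Register it as shown above (services.AddTransient<ICorsService, DbCorsService>() plus the DefaultCorsPolicyProvider), keep app.UseCors() in the pipeline as usual, and make sure the origin store itself is not scoped.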

Related

Autofac multitenant - override tenant at runtime

I have a .NET Core web application that uses the Autofac multitenant container.
The tenant strategy resolves the tenant by looking at the path of the HTTP request.
However, there is a specific piece of functionality in which tenant A needs to use the configuration of another tenant B (a sub-tenant in this case); the problem is that which sub-tenant's configuration to use is not known until tenant A has already performed some logic.
Is there a way to obtain the services of another tenant at runtime?
I will try to clarify with an example:
What I have is more or less:
An HTTP request to GET my.host.net/A/rules
The tenant resolver is capable of identifying that the current tenant is A (it is in the path, just after the host name)
The tenant resolver gets from the database the general rules, and one of them indicates that the configurations of another tenant B should be used
From here on, I would like to use the services of tenant B.
What I have tried / thought about:
Save the MultitenantContainer and use GetTenantScope to resolve the scope of tenant B in a class factory that resolves the services to use. However, I don't know the implications in terms of memory usage and possible problems with mixing tenants.
Forget about multitenancy and just save configurations per tenant in a specific class.
I'm not sure what "sub-tenant" means in this context. Autofac has no notion of multi-level tenancy in its multitenant support, so while this construct may make sense in the context of your application, trying to make that work in Autofac is not going to be simple.
Trying to switch tenants mid-request is going to be a challenge at best. Things through the request pipeline (middleware, controllers, etc.) are all going to want to use the HttpContext.RequestServices which is set first thing in the request. Like, it's literally the very first middleware that runs. Once it's set, the pipeline starts resolving and controllers and other things start resolving and... it's locked into that tenant. You can't switch it.
Given that, I'd caution you about trying to resolve some things from one tenant, switching mid-request, and resolving the rest from a different tenant. It's likely you'll get inconsistencies.
Say you have a middleware instance that takes an ISomeCoolService. You also have a controller that needs ISomeCoolService, but you're using the special tenant-switching logic in the controller instead of taking it as a dependency. During the middleware execution, the middleware will get tenant A's ISomeCoolService but the controller will use tenant B's ISomeCoolService and now you've got application behavior inconsistency. Trying to ensure consistency with the tenant switching is going to be really, really hard.
Here's what I'd recommend:
If you can do all the tenant determination up front in the ITenantIdentificationStrategy and cache that in, say, HttpContext.Items so you don't have to look it up again - do that (there's a sketch of this after these recommendations). The very first hit in the pipeline might be slow with the initial tenant determination logic, but after that the ITenantIdentificationStrategy can look in HttpContext.Items for the tenant ID instead of doing the database call and it'll be fast. This will stop you from having to switch tenants mid-request.
If you can't do the tenant determination up front and you need the pipeline to execute a while before you can figure it out... you may need a different way to determine the tenant. Truly, try to avoid switching tenants. It will cause you subtle problems forever.
Don't try to get "tenant inheritance" working, at least not with the stock Autofac Multitenant support. I recognize that it would be nice to say "some services are tenant A but others are tenant B and it inherits down the stack" but that's not something built into the multitenant support and is going to be really hard to try to force to work.
If you really, really, really are just dedicated to getting this tenant "hierarchy" thing working, you could try forking the Autofac.Multitenant support and implementing a new MultitenantContainer that allows for sub-tenants. The logic of the MultitenantContainer isn't actually that complex, it's just storing a tagged lifetime scope per tenant. Hypothetically, you could add some functionality to enable sub-tenant configuration. It won't be five minutes of work, and it's not really something we'd plan on adding to Autofac, so it would be a total fork that you get to own, but you could possibly do it.
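To illustrate the first recommendation (determine the tenant once, cache it in HttpContext.Items), a sketch along these lines could work - the path parsing and the DetermineTenant helper are purely illustrative stand-ins for the real rule lookup, not part of the Autofac API:

using Autofac.Multitenant;
using Microsoft.AspNetCore.Http;

public class CachingPathTenantIdentificationStrategy : ITenantIdentificationStrategy
{
    private const string TenantItemKey = "__tenantId";
    private readonly IHttpContextAccessor _accessor;

    public CachingPathTenantIdentificationStrategy(IHttpContextAccessor accessor)
        => _accessor = accessor;

    public bool TryIdentifyTenant(out object tenantId)
    {
        tenantId = null;
        var context = _accessor.HttpContext;
        if (context == null) return false;

        // Reuse the result of the expensive determination for the rest of the request.
        if (context.Items.TryGetValue(TenantItemKey, out var cached))
        {
            tenantId = cached;
            return tenantId != null;
        }

        // Slow path: parse the path and consult the rules exactly once per request.
        tenantId = DetermineTenant(context); // hypothetical helper
        context.Items[TenantItemKey] = tenantId;
        return tenantId != null;
    }

    private static object DetermineTenant(HttpContext context)
    {
        // e.g. "/A/rules" -> "A"; the real rule lookup would decide here, up front,
        // whether another tenant's configuration should be used instead.
        var first = context.Request.Path.Value?.Trim('/').Split('/')[0];
        return string.IsNullOrEmpty(first) ? null : first;
    }
}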

How to access dependency injection container in Symfony 4 without actual injection?

I've got a project written in Symfony 4 (can update to the latest version if needed). In it I have a situation similar to this:
There is a controller which sends requests to an external system. It goes through records in the DB and sends a request for every row. To do that there is a MagicApiConnector class which connects to the external system, and for every request type there is an XxxRequest class (like FooRequest, BarRequest, etc.).
So, something general like this:
foreach ($allRows as $row) {
    $request = new FooRequest($row['a'], $row['b']);
    $connector->send($request);
}
Now in order to do all the parameter filling magic, the requests need to access a service which is defined in Symfony's DI. The controller itself neither knows nor cares about this service, but the requests need it.
How can my request classes access this service? I don't want to set it as a dependency of the controller - I could, but it kinda seems awkward, as the controller really doesn't care about it and would only pass it through. It's an implementation detail of the request, and I feel like it shouldn't burden the users of the request with this boilerplate requirement.
Then again, sometimes you need to make a sacrifice in the name of the greater good, so perhaps this is one of those cases? It feels like I'm "going against the grain" and haven't grasped some ideological concept.
Added: OK, the full gory details, no simplification.
This all is happening in the context of two homebrew systems. Let's call them OldApp and NewApp. Both are APIs and NewApp is calling into the OldApp. The APIs are simple REST/JSON style. OldApp is not built on Symfony (mostly even doesn't use a framework), the NewApp is. My question is about NewApp.
The authentication for the OldApp APIs comes in three different flavors and might get more in the future if needed (it's not dead yet!). Different API calls use different authentication methods; sometimes even the same API call can be used with different methods (depending on who is calling it). All of these authentication methods are also homebrew. One uses POST fields, another uses custom HTTP headers; I don't remember about the third.
Now, NewApp is being called by an Android app which is distributed to many users. Android app actually uses both NewApp and OldApp. When it calls NewApp it passes along extra HTTP headers with authentication data for OldApp (method 1). Thus NewApp can impersonate the Android app user for OldApp. In addition, NewApp also needs to use a special command of OldApp that users themselves cannot call (a question of privilege). Therefore it uses a different authentication mechanism (method 2) for that command. The parameters for that command are stored in local configuration (environment variables).
Before me, a colleague had created the scheme of an APIConnector and APICommand, where you get the connector as a dependency and create command instances as needed. The connector actually performs the HTTP request; the commands tell it what POST fields and what headers to send. I wish to keep this scheme.
But now how do the different authentication mechanisms fit into this? Each command should be able to pass what it needs to the connector; and the mechanisms should be reusable for multiple commands. But one needs access to the incoming request, the other needs access to configuration parameters. And neither is instantiated through DI. How to do this elegantly?
This sounds like a job for factories.
function action(MyRequestFactory $requestFactory)
{
    foreach ($allRows as $row) {
        $request = $requestFactory->createFoo($row['a'], $row['b']);
        $connector->send($request);
    }
}
The factory itself is registered as a service and injected into the controller as part of the normal Symfony design. Whatever additional services are needed will be injected into the factory. The factory in turn can provide whatever services the individual requests might happen to need as it creates them.

Application Insights end-to-end multi component logging in Azure Functions

End-to-end logging between multiple components using Application Insights (AI) is based on a hierarchical Request-Id header, so each component is responsible for honoring a possible incoming Request-Id. To get the full end-to-end hierarchical flow correct in Application Insights, the Request-Id header needs to be used as the AI Operation.Id and Operation.ParentId (as described here).
But when making a request with a Request-Id header to an Azure Function using an HttpTrigger binding, for example (Microsoft.NET.Sdk.Functions 1.0.24), with integrated Application Insights configured (as described here), a completely new Operation.Id is created and used, causing the whole flow in AI to be lost. Any ideas on how to get around this?
Setting up a separate custom TelemetryClient might be an option, but that seems to require a lot of configuration to get the full ExceptionTrackingTelemetryModule and DependencyTrackingTelemetryModule right - especially when using Functions v2 and Core (ref to AI config). Has anyone got that working successfully?
This is not yet supported by Functions but should start working sometime early next year.
If you want to hack it through, you can add a reference to the ApplicationInsights SDK for AspNetCore (v2.4.1) and configure the RequestTrackingTelemetryModule yourself:
private static RequestTrackingTelemetryModule requestModule;

static Function1()
{
    // Initialize request tracking against the globally active AI configuration.
    requestModule = new RequestTrackingTelemetryModule();
    requestModule.Initialize(TelemetryConfiguration.Active);
}
This is pretty sketchy, not fully tested, and has drawbacks. E.g. the collected request is no longer augmented with Functions details (invocation ID, etc.). To overcome that you need to get the real TelemetryConfiguration from the Functions dependency injection container and use it to initialize the module (a sketch of that follows below).
It should be possible, but is blocked by some issue.
But even with the code above, you should get requests that respect incoming headers and other telemetry correlated to the request.
Also, when out-of-the-box support for correlation for http request is rolled out, this may break. So this is a hacky temporary solution, use it only if you absolutely have to.
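If you do go down that road, a rough sketch of the idea might look like the following. It assumes the same AspNetCore AI SDK reference as above plus the newer Functions dependency-injection support (a non-static function class), so the host's real TelemetryConfiguration can be constructor-injected; the wiring is illustrative, not an official recipe:

using Microsoft.ApplicationInsights.AspNetCore;
using Microsoft.ApplicationInsights.Extensibility;

public class Function1
{
    private static RequestTrackingTelemetryModule requestModule;
    private static readonly object initLock = new object();

    public Function1(TelemetryConfiguration telemetryConfiguration)
    {
        // Initialize the module once, against the configuration the Functions host
        // actually uses, instead of TelemetryConfiguration.Active.
        if (requestModule == null)
        {
            lock (initLock)
            {
                if (requestModule == null)
                {
                    var module = new RequestTrackingTelemetryModule();
                    module.Initialize(telemetryConfiguration);
                    requestModule = module;
                }
            }
        }
    }

    // The HttpTrigger function body itself stays unchanged.
}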

Is it correct aspnetcore way? Service & Dependency Injection

So I want to create a service that accesses an external API, and I want to cache common requests from the API inside of that service. It depends on 3 other services, but I want to give it its own instance of the cache; MemoryDistributedCache might later be changed for something else.
services.AddSingleton<ISomeApi, SomeApi>(provider => new SomeApi(
    Configuration.Get<Options>(),
    new MemoryDistributedCache(new MemoryCache(new MemoryCacheOptions())),
    provider.GetService<ILogger<SomeApi>>()
));
Now from my controllers I can access the API via DI. It works nicely, but I'm not sure if it's some sort of an anti-pattern or if there are better ways of doing it.
I mean, the real problem is separating the internal cache: requesting IDistributedCache from one service would give me the same object as if I requested it from another service; they must be separated.
This sounds like something you could use a proxy or decorator pattern for. The basic problem is that you have a service that does some data access, and another service responsible for caching the results of the first service. I realize you're not using a repository per se, but nonetheless the CachedRepository pattern should work for your needs. See here:
http://ardalis.com/introducing-the-cachedrepository-pattern
and
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
You can write your cached implementation such that it takes in the actual SomeApi type in its constructor if you don't need that part of the design to be flexible.
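As a rough sketch of that idea (the GetCommonDataAsync member is hypothetical, and the cache is constructed the same way as in the question so it stays private to the decorator):

using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;
using System.Threading.Tasks;

public class CachedSomeApi : ISomeApi
{
    private readonly SomeApi _inner;
    private readonly IDistributedCache _cache;

    public CachedSomeApi(SomeApi inner)
    {
        _inner = inner;
        // Private cache instance, separate from anything registered in the container.
        _cache = new MemoryDistributedCache(new MemoryCache(new MemoryCacheOptions()));
    }

    // Hypothetical member of ISomeApi, shown only to illustrate the caching flow.
    public async Task<string> GetCommonDataAsync(string key)
    {
        var cached = await _cache.GetStringAsync(key);
        if (cached != null) return cached;

        var result = await _inner.GetCommonDataAsync(key);
        await _cache.SetStringAsync(key, result);
        return result;
    }
}

// Registration sketch (controllers keep depending on ISomeApi only):
// services.AddSingleton<SomeApi>(provider => new SomeApi(/* as in the question */));
// services.AddSingleton<ISomeApi>(provider =>
//     new CachedSomeApi(provider.GetRequiredService<SomeApi>()));

This keeps the caching concern out of SomeApi itself, and swapping MemoryDistributedCache for another IDistributedCache later only touches the decorator.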

Add request interceptors to specific calls

I'm currently trying to figure out which options Retrofit offers to add an interceptor only to specific calls.
Background & use cases
I'm currently using Retrofit 1.9.
The use case is pretty simple. Imagine a user who needs to log in and get a session token. There is a call:
/**
 * Call the backend and request a session token
 */
@POST("auth_endpoint")
Observable<Session> login(...);
All other calls will require a token from the above session in the form of a request header. In other words, all subsequent calls will have a header which provides the session token to the backend.
My question
Is there a simple way of adding this header only to specific calls through interceptors?
What I've tried so far
Obviously the easiest approach was to add the @Header annotation to the specific calls and provide the token as a parameter.
I guess one can inspect the URL of the request inside the interceptor. Not very flexible.
Create different rest adapters with different interceptors. I've heard you should avoid creating several instances of the rest adapter for performance reasons.
Additional info
I'm not committed to interceptors, I would use other solutions
I've said I'm using retrofit 1.9, but I'd be also interested in a way to do it with retrofit 2.x
Please note this is not an answer; the comment box was too small.
I've recently had this problem and I came up to the same possible solutions as you.
First of all I put aside double adapters - that's a last resort.
The @Header field seems OK, because you explicitly define that this specific request needs authorization. However, it's kinda boring to use.
URL inspection in the interceptor looks "ugly", but I've decided to go with that. I mean, if all requests to one specific endpoint need that authorization header, then what's the problem?
I had two other ideas:
Somehow dynamically replace/modify the OkHttpClient which is used with Retrofit. After some tests I figured out that it's not possible.
Maybe create some custom @AddAuthorizationHeader annotation on the call definition, which would do everything for you, but I guess that wouldn't be possible either.
And in this matter Retrofit 2.x doesn't bring anything new.
