How to access dependency injection container in Symfony 4 without actual injection?

I've got a project written in Symfony 4 (can update to the latest version if needed). In it I have a situation similar to this:
There is a controller which sends requests to an external system. It goes through records in the DB and sends a request for every row. To do that, there is a MagicApiConnector class which connects to the external system, and for every request there is an XxxRequest class (like FooRequest, BarRequest, etc).
So, in general, something like this:
foreach ($allRows as $row) {
    $request = new FooRequest($row['a'], $row['b']);
    $connector->send($request);
}
Now in order to do all the parameter filling magic, the requests need to access a service which is defined in Symfony's DI. The controller itself neither knows nor cares about this service, but the requests need it.
How can my request classes access this service? I don't want to set it as a dependency of the controller - I could, but it kinda seems awkward, as the controller really doesn't care about it and would only pass it through. It's an implementation detail of the request, and I feel like it shouldn't burden the users of the request with this boilerplate requirement.
Then again, sometimes you need to make a sacrifice in the name of the greater good, so perhaps this is one of those cases? It feels like I'm "going against the grain" and haven't grasped some ideological concept.
Added: OK, the full gory details, no simplification.
This is all happening in the context of two homebrew systems. Let's call them OldApp and NewApp. Both are APIs, and NewApp calls into OldApp. The APIs are simple REST/JSON style. OldApp is not built on Symfony (mostly it doesn't even use a framework); NewApp is. My question is about NewApp.
The authentication for OldApp APIs comes in three different flavors and might get more in the future if needed (it's not yet dead!). Different API calls use different authentication methods; sometimes even the same API call can be used with different methods (depending on who is calling it). All these authentication methods are also homebrew. One uses POST fields, another uses custom HTTP headers; I don't remember about the third.
Now, NewApp is being called by an Android app which is distributed to many users. The Android app actually uses both NewApp and OldApp. When it calls NewApp, it passes along extra HTTP headers with authentication data for OldApp (method 1). Thus NewApp can impersonate the Android app user for OldApp. In addition, NewApp also needs to use a special command of OldApp that users themselves cannot call (a question of privilege). Therefore it uses a different authentication mechanism (method 2) for that command. The parameters for that command are stored in local configuration (environment variables).
Before me, a colleague had created the scheme of an APIConnector and APICommand, where you get the connector as a dependency and create command instances as needed. The connector actually performs the HTTP request; the commands tell it what POST fields and what headers to send. I wish to keep this scheme.
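Schematically, the scheme looks roughly like this (a simplified sketch in my own words; the names and method signatures are illustrative, not the real code):

interface ApiCommand
{
    /** POST fields this command wants to send. */
    public function getPostFields(): array;

    /** Extra HTTP headers this command wants to send. */
    public function getHeaders(): array;
}

class ApiConnector
{
    public function send(ApiCommand $command)
    {
        // Performs the actual HTTP request using the command's fields and headers.
    }
}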
But now how do the different authentication mechanisms fit into this? Each command should be able to pass what it needs to the connector; and the mechanisms should be reusable for multiple commands. But one needs access to the incoming request, the other needs access to configuration parameters. And neither is instantiated through DI. How to do this elegantly?

This sounds like a job for factories.
public function action(MyRequestFactory $requestFactory)
{
    foreach ($allRows as $row) {
        $request = $requestFactory->createFoo($row['a'], $row['b']);
        $connector->send($request);
    }
}
The factory itself is registered as a service and injected into the controller as part of the normal Symfony design. Whatever additional services are needed get injected into the factory. The factory, in turn, can provide whatever services the individual requests happen to need as it creates them.
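A minimal sketch of such a factory (SomeParameterService and the three-argument FooRequest constructor are assumptions for illustration, not your actual classes):

class MyRequestFactory
{
    private $parameterService;

    public function __construct(SomeParameterService $parameterService)
    {
        // Injected by Symfony's DI; the controller never sees this dependency.
        $this->parameterService = $parameterService;
    }

    public function createFoo($a, $b): FooRequest
    {
        // The factory supplies the service the request needs for its
        // parameter-filling magic, so callers only pass the row data.
        return new FooRequest($a, $b, $this->parameterService);
    }
}

With default autowiring, Symfony can register the factory as a service automatically, so the controller action above gets it injected without extra configuration.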

Related

HonoJs: Best way to start a Twitter SDK connection in Hono with CloudFlare?

In an old-school server environment, you initialize an SDK (like the Twitter SDK) when the server starts up, using dotenv to read secrets and tokens from your .env file like so:
import dotenv from 'dotenv';
import { Client } from 'twitter-api-sdk';

dotenv.config();
const twitterClient = new Client(process.env.TWITTER_SECRET_INFO);
And then you would use the twitterClient object to get data in one of the route handlers.
What's the best practice for initializing something like the twitter client in Hono with Cloudflare?
In the old service worker framework, I could have treated the secret info as a global environment variable much like in Node/Express, but in the new module worker code you have to access the environment variables as a parameter passed to a function call. It looks like Hono manages this by passing contexts to methods like .use/.get/.post.
Ideally, though, I wouldn't reinitialize the twitter connection on every request, especially since I'm just getting public info with a token, not dealing with any user login/password info.
Is there any way to do this in Hono/Cloudflare, or do I have to initialize the Twitter client middleware on each request? I looked at the Hono class constructor, but from what I can tell, all it does is take a router config object.
And from what I can tell of the Cloudflare docs, module workers have the same issue. Whereas constants in a service worker were declared outside the route handler, it looks like everything in a module worker is declared inside a fetch handler. Is there any way to initialize once during the life of the worker and not for each request?
In principle you could initialize the client on the first request:
import { Client } from 'twitter-api-sdk';

let twitterClient = null;

export default {
  async fetch(req, env, ctx) {
    // Lazily create the client the first time this instance handles a request.
    if (!twitterClient) {
      twitterClient = new Client(env.TWITTER_SECRET_INFO);
    }
    // ... normal code ...
  }
}
That said, though, is creating a new client actually expensive?
Constructing the client does not "initialize a connection". The client presumably makes requests by calling fetch(). The fetch() API doesn't expose any way to control the underlying connections used; each fetch() operates effectively independently. But the Workers Runtime will automatically reuse connections behind the scenes when possible. It could even reuse the same connection for two completely unrelated Workers, if they are contacting the same destination host. So it may be that even if you create a new client with every request, you're already getting good connection reuse.
That said, perhaps the client has to do some sort of key exchange upfront, e.g. exchanging a long-lived refresh token for an access token. That is annoying to have to repeat on every request. So in that sense, maybe caching it in a global helps.
However, note that Workers creates LOTS of instances of your Worker around the world. You may find if you curl your Worker several times in a row, each request lands on a different instance. You may find that caching in global state does not actually have much impact unless you have a large amount of traffic.
Caching may be more effective if you use the Cache API to store cached values into the colo-wide cache. Unfortunately, client libraries designed for Node environments may not provide the right hooks to do this.
One final note: putting live resources (things that are not just plain data structures) into the global scope can be dangerous on Workers, because in general a Promise created on behalf of one incoming request cannot be awaited in the context of some other request. So if that Twitter client does do some sort of upfront key exchange and tries to have all requests wait for that to complete, you may find that if you receive multiple requests at once before the initial key exchange finishes, all except the first request end up failing. To be honest, I would recommend creating a new client for every request unless you see a measurable performance problem from this.

Grails 4 Async with Database Operations

My Grails 4.0.10 app needs to call an external service. The call may take up to 3 minutes, so it has to be async'ed. After reading the doco I wrote a non-blocking service method to perform the call using a Promise without too much trouble.
The documentation describes how the async outcome can be displayed.
In my case the outcome affects the database. I must create new domain objects, modify existing domain objects and persist the result in the onComplete closure. The doco is rather quiet on how to do this.
These are my assumptions about the onComplete closure. My question is: Are the assumptions valid? Is this the proper way to do it?
No injected stuff is available, neither services nor (for example) log -- things you normally expect in a service
Database logic must be enclosed first within Tenants.withId if multitenancy is used, and then within withTransaction
withTransaction is prefixed with a domain name. However, other domains may freely be manipulated and persisted in the same closure
Domain instances picked up before the async call may be attached to the current session like this: instance.attach(), and then modified and saved
If logging is needed, create a new log instance

HttpClient, seems so hard to use it correctly

Just another question about the correct usage of HttpClient, because unless I am missing something, I find contradictory information about HttpClient in Microsoft Docs. These two links are the source of my confusion:
https://learn.microsoft.com/en-us/azure/architecture/antipatterns/improper-instantiation/#how-to-fix-the-problem
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-3.1#typed-clients
The first one states that the best approach is a shared singleton HttpClient instance; the second that AddHttpClient<TypedClient>() registers the service as transient, and more specifically (copied from that URL):
The typed client is registered as transient with DI. In the preceding code, AddHttpClient registers GitHubService as a transient service. This registration uses a factory method to:
Create an instance of HttpClient.
Create an instance of GitHubService, passing in the instance of HttpClient to its constructor.
I was always using AddHttpClient<TypedClient>() and feeling safe, but now I am puzzled again... And making things worse, I found this GitHub issue comment by @rynowak which states:
If you are building a library that you plan to distribute, I would strongly suggest that you don't take a dependency on IHttpClientFactory at all, and have your consumers pass in an HttpClient instance.
Why is this important to me? Because I am in the process of creating a library that mainly does two things:
Retrieve an access token from a token service (IdentityServer4)
Use that token to access a protected resource
And I am following the typed clients approach described in the link 2 above:
//from https://github.com/georgekosmidis/IdentityServer4.Contrib.HttpClientService/blob/master/src/IdentityServer4.Contrib.HttpClientService/Extensions/ServiceCollectionExtensions.cs
services.AddHttpClient<IIdentityServerHttpClient, IdentityServerHttpClient>()
.SetHandlerLifetime(TimeSpan.FromMinutes(5));
Any advice or examples of what a concrete implementation based on HttpClient should look like will be very welcome.
Thank you!

Application Insights end-to-end multi component logging in Azure Functions

End-to-end logging between multiple components using Application Insights (AI) is based on a hierarchical Request-Id header. So each component is responsible for honoring a possible incoming Request-Id. To get the full end-to-end hierarchical flow correct in Application Insights, the Request-Id header needs to be used as the AI Operation.Id and Operation.ParentId (as described here).
But when making a request with a Request-Id header to an Azure Function using an HttpTrigger binding, for example (Microsoft.NET.Sdk.Functions 1.0.24), with integrated Application Insights configured (as described here), a completely new Operation.Id is created and used - causing the whole flow in AI to be lost. Any ideas on how to get around this?
Setting up a separate custom TelemetryClient might be an option, but that seems to require a lot of configuration to get the full ExceptionTrackingTelemetryModule and DependencyTrackingTelemetryModule right - especially when using Functions v2 and Core (ref to AI config). Anyone got that successfully working?
This is not yet supported by Functions but should start working sometime early next year.
If you want to hack it through, you can add a reference to the ApplicationInsights SDK for AspNetCore (v2.4.1) and configure the RequestTrackingTelemetryModule.
private static RequestTrackingTelemetryModule requestModule;

static Function1()
{
    // Initialize request tracking once per host instance, using the active telemetry configuration.
    requestModule = new RequestTrackingTelemetryModule();
    requestModule.Initialize(TelemetryConfiguration.Active);
}
This is pretty sketchy, not fully tested, and has drawbacks. E.g. the requests collected are no longer augmented with Functions details (invocation id, etc.). To overcome that you need to get the real TelemetryConfiguration from the Function dependency injection container and use it to initialize the module.
It should be possible, but is blocked by some issue.
But even with the code above, you should get requests that respect incoming headers and other telemetry correlated to the request.
Also, when out-of-the-box support for correlation for http request is rolled out, this may break. So this is a hacky temporary solution, use it only if you absolutely have to.

Is it correct aspnetcore way? Service & Dependency Injection

So I want to create a service that accesses an external API, and I want to cache common requests from the API inside that service. It depends on 3 other services, but I want to give it its own instance of the cache; MemoryDistributedCache might later be changed for something else.
services.AddSingleton<ISomeApi, SomeApi>(provider => new SomeApi(
    Configuration.Get<Options>(),
    new MemoryDistributedCache(new MemoryCache(new MemoryCacheOptions())),
    provider.GetService<ILogger<SomeApi>>()
));
Now from my controllers I can access the API via DI. It works nicely, but I'm not sure if it's some sort of anti-pattern or if there are better ways of doing it.
I mean, the real problem is separating the internal cache: requesting IDistributedMemory from one service would give me the same object as if I requested it from another service; they must be separated.
This sounds like something you could use a proxy or decorator pattern for. The basic problem is that you have a service that does some data access, and another service responsible for caching the results of the first service. I realize you're not using a repository per se, but nonetheless the CachedRepository pattern should work for your needs. See here:
http://ardalis.com/introducing-the-cachedrepository-pattern
and
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
You can write your cached implementation such that it takes in the actual SomeApi type in its constructor if you don't need that part of the design to be flexible.
