So I want to create a service that accesses an external API, and I want to cache common requests from the API inside that service. It depends on 3 other services, but I want to give it its own instance of the cache; MemoryDistributedCache might later be changed for something else:
services.AddSingleton<ISomeApi, SomeApi>(provider => new SomeApi(
    Configuration.Get<Options>(),
    new MemoryDistributedCache(new MemoryCache(new MemoryCacheOptions())),
    provider.GetService<ILogger<SomeApi>>()
));
Now from my controllers I can access the API via DI. It works nicely, but I'm not sure if it's some sort of anti-pattern, or if there are better ways of doing it.
I mean, the real problem is separating the internal cache: requesting IDistributedCache from one service would give me the same object as if I requested it from another service. They must be separated.
This sounds like something you could use a proxy or decorator pattern for. The basic problem is that you have a service that does some data access, and another service responsible for caching the results of the first service. I realize you're not using a repository per se, but nonetheless the CachedRepository pattern should work for your needs. See here:
http://ardalis.com/introducing-the-cachedrepository-pattern
and
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
You can write your cached implementation such that it takes in the actual SomeApi type in its constructor if you don't need that part of the design to be flexible.
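For illustration, here's a minimal decorator sketch along those lines. It assumes ISomeApi exposes a single GetThingAsync method (hypothetical - substitute your real API surface), and it gives the decorator its own private cache instance, constructed the same way as in your registration code:
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;

public class CachedSomeApi : ISomeApi
{
    private readonly ISomeApi _inner;

    // Private cache instance - not shared with any other service.
    private readonly IDistributedCache _cache =
        new MemoryDistributedCache(new MemoryCache(new MemoryCacheOptions()));

    public CachedSomeApi(ISomeApi inner)
    {
        _inner = inner;
    }

    // GetThingAsync stands in for whatever ISomeApi actually exposes.
    public async Task<string> GetThingAsync(string id)
    {
        var cached = await _cache.GetStringAsync(id);
        if (cached != null)
            return cached;

        var result = await _inner.GetThingAsync(id);
        await _cache.SetStringAsync(id, result, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
        });
        return result;
    }
}
Registration then wraps the real implementation inside the same factory lambda (new CachedSomeApi(new SomeApi(...))), and swapping MemoryDistributedCache for Redis or anything else later only touches the decorator.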
Our service allows customers to enter their own domain to access the service, after setting the proper DNS records. Their domain/subdomain is pointed at our front-end server, which makes the service more personalized for their users.
Since any customer can change their domain at any time, or even add new ones, we need to make a database call to verify the Origin header.
I've been trying to simply not handle CORS as usual (app.UseCors() etc.), and instead use filters to resolve the "header situation". I gave up on this approach because I realized that it just won't work (no endpoint with "OPTIONS" allowed). It also feels dirty, even if it had worked.
I tried implementing ICorsPolicyProvider and creating a custom CORS attribute, but this seems to be a dead end, since I can't find a way to inject the services I need to verify the origin.
Is there a way to verify the Origin header while having access to the needed services?
The solution is to implement ICorsService. The default implementation (CorsService) is easy to understand and is a good starting point.
In your services configuration, replace the call
services.AddCors();
with:
services.AddTransient<ICorsService, YourImplementationOfICorsService>()
    .AddTransient<ICorsPolicyProvider, DefaultCorsPolicyProvider>();
Everything else is the same. The only drawback of this solution is that no scoped dependencies may be injected into your ICorsService implementation.
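As a rough sketch, assuming a hypothetical IOriginStore abstraction over your customer-domain table, you can reuse the default CorsService for the header mechanics and only swap in a database-backed origin check:
using Microsoft.AspNetCore.Cors.Infrastructure;
using Microsoft.AspNetCore.Http;

// Hypothetical abstraction over the customer-domain table.
public interface IOriginStore
{
    bool IsKnownOrigin(string origin);
}

public class DbBackedCorsService : ICorsService
{
    private readonly CorsService _inner;    // default implementation, reused
    private readonly IOriginStore _origins;

    public DbBackedCorsService(CorsService inner, IOriginStore origins)
    {
        _inner = inner;
        _origins = origins;
    }

    public CorsResult EvaluatePolicy(HttpContext context, CorsPolicy policy)
    {
        // Build an effective policy whose origin check hits the database.
        var effective = new CorsPolicyBuilder()
            .AllowAnyHeader()
            .AllowAnyMethod()
            .SetIsOriginAllowed(origin => _origins.IsKnownOrigin(origin))
            .Build();

        return _inner.EvaluatePolicy(context, effective);
    }

    public void ApplyResult(CorsResult result, HttpResponse response)
    {
        _inner.ApplyResult(result, response);
    }
}
You would also register CorsService itself (services.AddTransient<CorsService>()) so it can be injected here, and keep IOriginStore transient or singleton, given the scoped-dependency restriction mentioned above.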
I've got a project written in Symfony 4 (I can update to the latest version if needed). In it I have a situation similar to this:
There is a controller which sends requests to an external system. It goes through records in the DB and sends a request for every row. To do that there is a MagicApiConnector class which connects to the external system, and for every request type there is an XxxRequest class (like FooRequest, BarRequest, etc.).
So, something like this in general:
foreach ($allRows as $row) {
    $request = new FooRequest($row['a'], $row['b']);
    $connector->send($request);
}
Now in order to do all the parameter filling magic, the requests need to access a service which is defined in Symfony's DI. The controller itself neither knows nor cares about this service, but the requests need it.
How can my request classes access this service? I don't want to set it as a dependency of the controller - I could, but it kinda seems awkward, as the controller really doesn't care about it and would only pass it through. It's an implementation detail of the request, and I feel like it shouldn't burden the users of the request with this boilerplate requirement.
Then again, sometimes you need to make a sacrifice in the name of the greater good, so perhaps this is one of those cases? It feels like I'm "going against the grain" and haven't grasped some ideological concept.
Added: OK, the full gory details, no simplification.
This is all happening in the context of two homebrew systems. Let's call them OldApp and NewApp. Both are APIs, and NewApp calls into OldApp. The APIs are simple REST/JSON style. OldApp is not built on Symfony (mostly it doesn't even use a framework); NewApp is. My question is about NewApp.
The authentication for the OldApp APIs comes in three different flavors, and might gain more in the future if needed (it's not dead yet!). Different API calls use different authentication methods; sometimes even the same API call can be used with different methods (depending on who is calling it). All these authentication methods are also homebrew. One uses POST fields, another uses custom HTTP headers; I don't remember about the third.
Now, NewApp is called by an Android app which is distributed to many users. The Android app actually uses both NewApp and OldApp. When it calls NewApp, it passes along extra HTTP headers with authentication data for OldApp (method 1). Thus NewApp can impersonate the Android app's user when talking to OldApp. In addition, NewApp also needs to use a special command of OldApp that users themselves cannot call (a question of privilege). Therefore it uses a different authentication mechanism (method 2) for that command. The parameters for that command are stored in local configuration (environment variables).
Before me, a colleague had created a scheme of an APIConnector and an APICommand, where you get the connector as a dependency and create command instances as needed. The connector actually performs the HTTP request; the commands tell it what POST fields and what headers to send. I wish to keep this scheme.
But now, how do the different authentication mechanisms fit into this? Each command should be able to pass what it needs to the connector, and the mechanisms should be reusable across multiple commands. But one needs access to the incoming request, and the other needs access to configuration parameters. And neither is instantiated through DI. How do I do this elegantly?
This sounds like a job for factories.
function action(MyRequestFactory $requestFactory)
{
    // $connector is obtained as before (e.g. also injected).
    foreach ($allRows as $row) {
        $request = $requestFactory->createFoo($row['a'], $row['b']);
        $connector->send($request);
    }
}
The factory itself is registered as a service and injected into the controller as part of the normal Symfony design. Whatever additional services are needed will be injected into the factory. The factory in turn can provide whatever services the individual requests happen to need as it creates them.
Ok, so the problem is:
I've got an 'order' entity, and it has a 'status' property. On a status change, I want some other objects to be informed of this event, so I've decided to use the Observer pattern. One of the observers notifies clients via email. Now I want to render the email texts from Twig templates. As I understand from the book, rendering templates in controllers is done with the 'templating' service.
So the question is as follows: how can I access the 'templating' service in my Observer class?
Specification:
I was advised to implement my Observer as a service, but I'm not sure about that. I've tried to solve this problem, and here are my options:
Use the Registry. A solution that is straight and hard as a rail. I guess it misses the whole point of DI and the Service Container. A huge plus of this solution is that I can access all common services from any point of my application.
Pass the needed services in from the context, via the constructor or via setters. This is more in the Sf2 spirit. But that brings another list of problems, which are not related to this question.
Use observers as services. I'm not really sure about this option, because the book says that a service is a piece of common functionality, and I don't think that observing an entity with a number of discrete properties is a common task.
I'm looking for a solution in the Sf2 spirit that can be applied consistently across the whole project, so all answers with an explanation are appreciated.
As with any other service in a Symfony2 project, you can access it from within other classes through the dependency injection container. Basically, you would register your observer class as a service, and then inject the templating service into your observer service. See the docs for injecting services.
If you're not familiar with how Symfony handles dependency injection, I'd suggest reading that entire chapter of the documentation - it's very helpful. Also, if you want to find all the services that are registered for your application, you can use the console command container:debug. You can also append a service name after that to see detailed info about the service.
Edit
I read your changes to the question, but still recommend going down the DI route. That is the Symfony2 spirit :) You're worried that your observer isn't common enough to be used as a service, but there's no hard rule saying "You must use this piece of code in X locations in order for it to be 'common'".
Using the DIC comes with another huge benefit - it handles other dependencies for you. Let's say the templating service has 3 services injected into it. When using the DIC, you don't need to worry about the templating service's dependencies - they are handled for you. All you care about is telling it "inject the templating service into this other service" and Symfony takes care of all the heavy lifting.
If you're really opposed to defining your observer as a service, you can use constructor or setter injection as long as you're within a container-aware context.
I'm wondering which is the better approach from a performance point of view: is it better to use one web-service method to load data, passing the database table name and keys, or to use a separate method for each database table? Note that I'm using .NET ASMX through AJAX requests.
It's obvious that one method is better from an OO perspective, since it has a single function type ('data loading'), but what about performance? Is IIS affected by that or not? Also, is it better to make multiple web services (.asmx files) or just one?
I really don't think that creating separate methods for fetching data from different tables is necessary. The performance gain/loss that you are likely to experience by passing an additional table-name param to your web-service call would be too small to even consider, unless your table names are really huge, which I don't think is the case.
The only reason I would even consider doing something like this is if I had nothing else left to do in terms of performance improvement, or if I were forced to ;-).
If you really want to optimize your request size, try:
serializing your input params using JSON (if you are not doing it already)
using a cookieless domain for your web service
Hope this helps.
I don't think the service level should have any knowledge of database tables, just like you ideally don't want to see data-access code in a controller action or an ASPX code-behind.
Personally, I prefer to organize my services to match my domain model.
If I have Customer, Order, and Item classes, for example, I would have corresponding Customer.asmx, Order.asmx, and Item.asmx services to expose selected methods within those classes.
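For example, a minimal Order.asmx code-behind in that style might look like the following (OrderDto and OrderRepository are hypothetical placeholders for your own domain types):
using System.Web.Services;

[WebService(Namespace = "http://yourcompany.example/services")]
public class Order : WebService
{
    // Expose a business operation, not a table name plus keys.
    [WebMethod]
    public OrderDto GetOrder(int orderId)
    {
        // Delegate to the domain layer; the service stays thin.
        return OrderRepository.Find(orderId);
    }
}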
Services are typically responsible for exposing business functionality through a contract. I realize ASMX services had no real concept of "contracts" in the broadest sense; however, you can think of a contract as the set of operations supported by the service. What is your goal here - do you want to expose tabular data as a service?
Service technology on the Microsoft stack has come a long way from ASMX. Perhaps an obvious question: have you looked at WCF Data Services?
Links:
Exposing Your Data as a Service (WCF Data Services)
Getting Started with WCF Data Services
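To give a sense of scale, the classic WCF Data Services skeleton is tiny. Assuming a hypothetical NorthwindEntities Entity Framework context, each entity set becomes a queryable REST endpoint without any hand-written per-table methods:
using System.Data.Services;
using System.Data.Services.Common;

public class NorthwindDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose read-only access to every entity set.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion =
            DataServiceProtocolVersion.V2;
    }
}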
I'm currently working with web services that return objects such as a list of files, e.g. a File array.
I wanted to know whether it's best practice to bind this type of object directly to my front-end code (for example a repeater/listview), or to first parse it into my own list of "file" classes, e.g. customFiles[].
If the web service changes, it will break my front-end code; however, if I create my own CustomFile class, then I would only need to change my code in one place to fix the issue. But it just seems like a lot of extra work to recreate the same classes from a web service, so I wanted to know what the best practice is for this type of work.
There is a delicate balancing act in properly encapsulating implementation details. Too little encapsulation is a maintenance nightmare as small changes in any area break the application. Too many layers is a different kind of maintenance headache altogether.
In this particular case I would create a small layer in your application to encapsulate the web service calls. This will ease your maintenance in both the application and the service as they will be loosely coupled.
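Concretely, that layer can be a small gateway class that maps the generated proxy types onto your own. FileServiceSoapClient and its Name/Size properties below are hypothetical stand-ins for whatever your generated proxy actually exposes:
using System.Collections.Generic;
using System.Linq;

// Application-owned type; the UI binds to this, never to the proxy types.
public class CustomFile
{
    public string Name { get; set; }
    public long Size { get; set; }
}

public class FileServiceGateway
{
    private readonly FileServiceSoapClient _proxy = new FileServiceSoapClient();

    public IList<CustomFile> GetFiles()
    {
        // If the service contract changes, only this mapping changes.
        return _proxy.GetFiles()
                     .Select(f => new CustomFile { Name = f.Name, Size = f.Size })
                     .ToList();
    }
}
A repeater/listview then binds to IList<CustomFile> and never sees the generated proxy types.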
It sounds like you have already answered your own question. Best practice is to create your own custom class for the reasons you point out, but it is significant extra work.
If the web service isn't likely to change, then just use the existing classes; but if you need to cater for change, then create your own.
Returning a class is fine as long as your client knows how to deserialize it. If it's truly a web service, where you don't control both ends of the conversation, it's more common to start with schemas for the XML request and response streams. That decouples the client from the web service a bit more, and makes any client that can send XML via HTTP and consume an XML response fair game.