Does all business logic need to be a service in Symfony2?

I am learning Symfony2 and I find the dependency injection stuff and the service container interesting.
But should I put all my logic into services in the container and call them from the controller, or can I use the old way (create classes and instantiate them when I need them)?

Here's what the official documentation says on this topic:
"The advantage of thinking about "services" is that you begin to think about separating each piece of functionality in your application into a series of services. Since each service does just one job, you can easily access each service and use its functionality wherever you need it. Each service can also be more easily tested and configured since it's separated from the other functionality in your application. This idea is called service-oriented architecture and is not unique to Symfony2 or even PHP. Structuring your application around a set of independent service classes is a well-known and trusted object-oriented best-practice. These skills are key to being a good developer in almost any language."
Put another way: turning common logic (used globally in your application) into services is good practice.

From my point of view, the role of the container is to make all the common logic available to the whole application.
It acts as a substitute for the global arrays ($_REQUEST, $GLOBALS, etc.), global variables, global constants (as opposed to class constants, which remain useful), global functions, and all the other things that make PHP a rather messy language. It aims at making PHP the full OOP language it wants to be.
It encourages one to avoid the procedural programming that PHP still allows, and that is good.
In short, you can still instantiate and use classes the usual way. But whenever you spot two similar pieces of code, or code using the same logic or reusing the same information, that generally means you want the service container to help you reuse that code.
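To make that concrete with a minimal, framework-neutral sketch (written in C# to match the other sketches further down this page, since the idea is the same in any container; in Symfony2 the equivalent shape is a class registered in services.yml and fetched from the container; ISlugger is an invented example):

using System;
using Microsoft.Extensions.DependencyInjection;

// Composition root: register the shared logic once...
var services = new ServiceCollection();
services.AddSingleton<ISlugger, Slugger>();
var provider = services.BuildServiceProvider();

// ...and resolve it wherever it's needed (in a real app, via constructor injection).
var slugger = provider.GetRequiredService<ISlugger>();
Console.WriteLine(slugger.Slugify("Hello World")); // prints: hello-world

// The duplicated logic, extracted behind an interface.
public interface ISlugger { string Slugify(string title); }

public class Slugger : ISlugger
{
    public string Slugify(string title) => title.Trim().ToLowerInvariant().Replace(' ', '-');
}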

Related

Clean Architecture (DDD): Why are domain objects (DB entities) and DbContext in separate projects?

I understand the need for abstraction, separation of concerns, and unit tests; however, it seems to me that separating entities and context into two projects is slightly overengineered.
I could really be missing something, but is this because you want to stay open to different ORMs?
Many thanks for the clarification.
The main reason I prefer to have Infrastructure in a separate project, rather than just a separate folder, from the domain model (Core project) is simple: enforcing my design via the compiler.
I have a design rule, which is basically the Dependency Inversion Principle. Don't depend on low level implementations (such as those found in Infrastructure), instead depend on abstractions (interfaces). Also, don't have your abstractions depend on details; have details depend on abstractions. The details of how and which infrastructure is being used for a given abstraction are in the Infrastructure service implementations.
Abstractions say what; implementations say how. (A short code sketch follows the list below.)
What: I need to send an email.
ISendEmail interface
How: I want to do it using the SMTP protocol.
SmtpEmailSender class (implements ISendEmail)
How: I want to do it using the SendGrid API.
SendGridEmailSender class (implements ISendEmail)
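As a minimal C# sketch of that split (the SendAsync signature is my own illustration; the answer only names the types):

using System.Threading.Tasks;

// Core project: the abstraction says what is needed, and nothing about how.
public interface ISendEmail
{
    Task SendAsync(string to, string subject, string body);
}

// Infrastructure project: each implementation says how.
// Infrastructure references Core; Core never references Infrastructure.
public class SmtpEmailSender : ISendEmail
{
    public Task SendAsync(string to, string subject, string body)
    {
        // ...open an SMTP connection and send here...
        return Task.CompletedTask;
    }
}

public class SendGridEmailSender : ISendEmail
{
    public Task SendAsync(string to, string subject, string body)
    {
        // ...call the SendGrid HTTP API here...
        return Task.CompletedTask;
    }
}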
So, in a single project, how would you ensure that the implementations depend on the interfaces, and not vice versa?
How would you ensure your domain classes didn't directly reference or use Infrastructure types?
I'm not aware of a way to do this.
But if you put them in separate projects, and you have the implementation details project depend on the abstractions-and-models project, you now have solved the problem. The compiler WILL NOT ALLOW the Core project to reference anything in the Infrastructure project, because it would create a circular dependency.
This constraint helps developers do the right thing and nudges them into the pit of success even if they don't completely grok how the dependency inversion principle works or why it's important.
And I've never found 3 projects (Core/Infra/UI) to be overengineering for any non-demo app I've built for real work. It's only 3 projects.

ASP.NET infrastructure in MediatR handlers

I prefer to keep my handlers free from ASP.NET infrastructure, which is very hard to test (yes, even in ASP.NET Core). But sometimes it happens and you end up with a dependency like UserManager (I'd like to know one day why it's not an interface), HttpContext, etc., and unit tests turn into a mocking hell.
I tried using integration testing to deal with it, creating a TestServer and having all the ASP.NET infrastructure initialized for every API call. It works quite well, but it sometimes seems like overkill when I just want to test the simple logic of my handler. And while it solves the technical problem of mocking ASP.NET infrastructure, it leaves the architectural problem (if you consider it one) of having ASP.NET infrastructure in your handlers.
What are the recommended approaches for dealing with this?
I feel your pain. I stumbled across a fantastic blog post from Jimmy Bogard that handles this problem by using what Martin Fowler calls Subcutaneous Tests. I will leave the deep explanation to those experts, but in a nutshell, subcutaneous tests simply avoid all the difficult-to-test aspects of the UI.
Shameless plug: I am currently in the process of writing up a wiki that demonstrates these patterns in a sample end-to-end project on GitHub. It's not difficult to follow, but it is probably too much code to post in an SO answer.
To Summarize:
If you are using MediatR correctly, your controllers should be so thin that testing them is pointless.
What you want to test are your handlers.
However, you want to test your handlers as part of your real world pipeline.
To Solve:
Wrap the HTTP request in a transaction.
Build a test fixture that mimics the application's Startup.cs.
Set up a test database server to execute queries and commands against, and reset it after each test.
That's basically it. Every time you run an integration test against one of your handlers:
The hosting environment is mocked, but your application is started up as in the real world.
Your query or command is wrapped in a transaction through your DbContext.
The handler is executed against a real database, which is then reset (a rough sketch of such a fixture follows).
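A stripped-down sketch of such a fixture (SliceFixture, AppDbContext, and the registrations are illustrative placeholders, not the actual code from the blog post or wiki):

using System;
using System.Threading.Tasks;
using MediatR;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public static class SliceFixture
{
    // Built once, mirroring the registrations in the app's Startup.cs.
    private static readonly IServiceProvider Provider = BuildProvider();

    private static IServiceProvider BuildProvider()
    {
        var services = new ServiceCollection();
        // Register the same services as the real Startup.cs here, e.g.:
        // services.AddDbContext<AppDbContext>(...); services.AddMediatR(...);
        return services.BuildServiceProvider();
    }

    // Sends a request through the real MediatR pipeline inside a transaction.
    public static async Task<TResponse> SendAsync<TResponse>(IRequest<TResponse> request)
    {
        using var scope = Provider.CreateScope();
        var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
        await using var tx = await db.Database.BeginTransactionAsync();

        var mediator = scope.ServiceProvider.GetRequiredService<IMediator>();
        var response = await mediator.Send(request);

        // Rolling back stands in for the reset step here; the real samples
        // reset the database between tests instead.
        await tx.RollbackAsync();
        return response;
    }
}

// Stand-in for your application's real DbContext.
public class AppDbContext : DbContext { }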
I would add more code examples to my answer but between the blog post and the wiki I provided, it is much easier to follow the code examples there.
Edit 8/2021:
Stick with the source. Jimmy Bogard keeps the Contoso University project current on his GitHub page. Another great, slightly more advanced example is the modular monolith project by Kamil Grzybek, which is also updated regularly on his GitHub page.
MediatR or no, you should always aim to have only very basic pass-this-along logic in your controllers and call injected business logic classes from there to do the actual work. Because you inject that business logic through interfaces, your controllers' dependencies are easily mocked in unit tests, and those tests can focus on whether the controllers implement those interfaces properly and do only the basic work of routing input and output. And your actual business logic can be tested even more easily.
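For example (IOrderService and the request/response types are stand-ins for your own business interfaces):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    // Business logic behind an interface, so tests can mock it.
    private readonly IOrderService _orders;

    public OrdersController(IOrderService orders) => _orders = orders;

    // The controller only routes input and output; the service does the real work.
    [HttpPost]
    public async Task<IActionResult> Create(CreateOrderRequest request) =>
        Ok(await _orders.CreateAsync(request));
}

// Stand-ins for whatever your business layer actually defines.
public interface IOrderService { Task<OrderDto> CreateAsync(CreateOrderRequest request); }
public record CreateOrderRequest(string Product, int Quantity);
public record OrderDto(int Id);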
For classes that are static, for instance for reading web.config settings, one strategy I like a lot is to make an interfaced wrapper class around them. While ConfigurationManager is static, I can still write a regular class, with an interface, whose methods or properties (preferably semantically named) read specific settings from ConfigurationManager. Now I can easily mock any configured setting (or its absence) in my tests by just mocking the interface and setting up different return values.
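A minimal sketch of such a wrapper (IPaymentSettings is an invented example):

using System.Configuration;

// Semantically named settings instead of magic strings all over the codebase.
public interface IPaymentSettings
{
    string PaymentApiUrl { get; }
}

// The thin wrapper: the only class that touches the static ConfigurationManager.
public class PaymentSettings : IPaymentSettings
{
    public string PaymentApiUrl => ConfigurationManager.AppSettings["PaymentApiUrl"];
}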
I'd say it depends on the level of confidence you want to get in the end. If you want to make sure the whole system works as expected, then integration tests using a TestServer are probably the way to go.
One advantage of MediatR, though, is that it allows you to decouple your business logic from the application using it, which is why, at the very top level (say, in controllers), there's no logic but just a delegation to the mediator.
That being said, you're right that sometimes your logic needs information from the hosting application. An example would be the user making the request, which is accessible in the HTTP context.
In that case, if you want to avoid having to set up a test HTTP server to test that your logic works, you could represent that information as an abstraction, and your handler would then take a dependency on that abstraction. Your tests could then mock that dependency while using the real system for everything else.
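For instance, for the current-user example (ICurrentUser is an illustrative name, not anything MediatR provides):

using Microsoft.AspNetCore.Http;

// The abstraction the handler depends on; no HttpContext in sight.
public interface ICurrentUser
{
    string UserName { get; }
}

// The hosting application supplies the HTTP-backed implementation.
public class HttpContextCurrentUser : ICurrentUser
{
    private readonly IHttpContextAccessor _accessor;

    public HttpContextCurrentUser(IHttpContextAccessor accessor) => _accessor = accessor;

    public string UserName => _accessor.HttpContext?.User?.Identity?.Name;
}

The handler takes ICurrentUser in its constructor, and unit tests supply a trivial stub instead of spinning up an HTTP server.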
Does that make sense?

Cascading microservices using Meteor

I've been looking into scaling Meteor, and had an idea using the Meteor Cluster package:
Create a super-service*, which the user connects to, containing the general core packages to be used by every micro-service (api, app, salesSite, etc. would make use of its packages),
The super-service then routes to the appropriate micro-service (e.g., the app), providing it with the functionality of its own packages.
(* - as in super- and sub-, not that it's awesome... I mean it is but...)
The idea being that I can cascade each service as a superset of the super-service. This would also allow me to cleverly inherit functionality for other services in a cascading-service style. E.g.,
unauthedApp > guestApp > userApp > modApp > adminApp,
for the application, where each service in the chain inherits the functionality of the one before it (i.e., the further right along that chain, the more extra functionality is added and inherited).
Is this possible?
EDIT: If possible, is there a provided example of how to implement such a pattern using micro-services?
[[[[[ BIG EDIT #2: ]]]]]
I think I'm trying to make a solution fit the problem, so let me re-explain, so that this question can be answered based on the issue rather than the solution I'm trying to implement.
Basically, I want each service to "inherit" (for lack of a better word) the packages it depends on for the functionality it needs, so that no code is unnecessarily sent over the wire.
So starting with the core packages, which hold the libraries I want all of my services to have, I then want to "add" functionality as needed: page packages if serving a page-based service (unlike, say, the API service, which doesn't render pages), then the appropriate role-based page packages, etc., until the most specific packages are added.
My thought was that I could chain the services in such a way that I could traverse from the most generic to the most specific service, finally ending with a composition of packages from multiple services. So, for example, the guestApp might be the core packages + generic page packages + generic app packages + unauthedApp packages + guestApp packages, so that no unnecessary packages are added.
Also, with this imaginary pattern I'm describing, I don't need to add all my core packages to each microservice; I can deal with them all in the core package right at the top of the package traversal I've discussed above, and not have to worry about forgetting to add the packages to the "inheriting" packages.
Hope my reasoning here makes sense, and I hope you guys know of a best practice for doing this. Thank you!
Short answer:
Yes! That's a good use of a microservice architecture.
Long answer:
Microservices don't necessarily provide you an inheritance mechanism as in OOP. You should consider microservices as independent "functions" which take an input and respond with an output/action. Any microservice can depend on another to complete its own task.
And then, you "compose" the necessary microservices in order to achieve the final output/action.
You can have one or few web facing "frontend" services that use a mix of few other backend microservices whose ports are not open to the public network.
The drawback with a microservice would be its "minimum footprint". The idea with microservices revolves around some main benefits:
Separate core services so that they can be "maintained" independently
Separate core services so that they can be "replaced" independently
Separate core services so that they can be "scaled" independently
But then, each microservice, being a Node/Meteor app, will have its minimum CPU/RAM footprint even when it is just idle, waiting for a connection.
Furthermore, managing a single monolithic app, or just a few "largish" services is much easier, from a devops standpoint, than managing tens of individual deployments.
So, as with all engineering decisions, the right answer implies some kind of "balance".
Edit: regarding inheritance
As per the OP's comment, the microservices can indeed be referenced from parent code as either functions or classes and be composed (functions) or inherited from (classes), because after all, the underlying functionality is exposed as DDP endpoints.
If you are using the cluster package from meteorhacks:
// create a connection to your microservice
var someService = Cluster.discoverConnection("someService");
// call a normal meteor method from that service
var resultFromSomeService = someService.call("someMethodFromSomeService");
So, as with any piece of JavaScript code, you can wrap the above piece of code in a function, or in a class with its constructor and all, and inherit from it, exposing its interfaces as you desire.

Handling Dependency Injections - Where does the logic go?

I'm working on an ASP.Net website along with a supporting Class Library for my Business Logic, Data Access code, etc.
I'm EXTREMELY new and unfamiliar with the Unity Framework and dependency injection as a whole. However, I've managed to get it working by following the source code for the ASP.NET 3.5 Portal Starter Kit on CodePlex. But herein lies the problem:
The Class Library is set up with Unity, and several of my classes have [Dependency] attributes on their properties (I'm exclusively using property setter injection). However, the Global.asax is telling Unity how to handle the injections... in the Class Library.
Is this best practice, or should the Class Library handle its own injections so that I can reuse the library with other websites, web apps, or applications? If that is indeed the case, where would the injection code go?
I'm not sure how clear the question is. Please let me know if I need to explain more.
Though I'm not familiar with Unity (I'm a StructureMap user), the final mappings should live in the consuming application. You can have the DLL you are using define those mappings, but you also want to be able to override them when needed. Say you need an instance of IFoo, and you have one mapped in your Class Library, but you've added a new one to use that lives only in the website. Having the mappings defined in the site allows you to keep things loosely coupled; otherwise, why are you using a DI container?
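In Unity terms, I believe that looks roughly like this (IFoo and friends are placeholders; for the same interface, the registration made last becomes the default):

using Microsoft.Practices.Unity;

var container = new UnityContainer();

// Default mapping, e.g. registered by the class library's setup code:
container.RegisterType<IFoo, LibraryFoo>();

// The website overrides it later (e.g. in Global.asax):
container.RegisterType<IFoo, WebsiteFoo>();

var foo = container.Resolve<IFoo>(); // foo is a WebsiteFoo

public interface IFoo { }
public class LibraryFoo : IFoo { }
public class WebsiteFoo : IFoo { }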
Personally, I try to code things to facilitate an IoC container, but I will never try to force an IoC container into a project.
My solution breakdown goes roughly:
(Each of these is a separate project.)
Project.Domain
Project.Persistence.Implementation
Project.Services.Implementation
Project.DIInjectionRegistration
Project.ASPNetMVCFrontEnd (I use MVC, but it doesn't matter).
I try to maintain strict boundaries around project references. The actual frontend project cannot reference any *.Implementation projects directly. (The *.Implementation projects contain the actual implementations of the interfaces in the domain, in this case.) So the ASPNetMVCFrontEnd has references to the Domain, to the DIInjectionRegistration project, and to my DI container.
In Project.DIInjectionRegistration I tie all the pieces together. This project has all the references to the implementations and to the DI framework, and it contains the code that registers the components. Autofac lets me break component registration down easily, which is why I took this approach.
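For instance, an Autofac module per implementation project looks roughly like this (the type names are placeholders, not my real project's code):

using Autofac;

// One module per implementation project keeps each project's wiring in one place.
public class PersistenceModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<SqlCustomerRepository>().As<ICustomerRepository>();
    }
}

// The composition root (in Project.DIInjectionRegistration) ties the modules together.
public static class CompositionRoot
{
    public static IContainer Build()
    {
        var builder = new ContainerBuilder();
        builder.RegisterModule(new PersistenceModule());
        // ...register the other projects' modules the same way...
        return builder.Build();
    }
}

// Placeholder types standing in for a real domain interface and its implementation.
public interface ICustomerRepository { }
public class SqlCustomerRepository : ICustomerRepository { }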
In the example here I don't have any references to the container in my implementation projects. There's nothing wrong with that, though, and if your implementation requires it, then go ahead.

What are the benefits of Spring ActionScript, considering Dynamic Proxies and Reflection are limited?

What are the benefits of Spring ActionScript, considering Dynamic Proxies are not possible in the current version of ActionScript and Reflection is quite limited?
So, for example, I could specify my object creation in an XML application context, but why would I do that when I can simply specify it in code, and hence take advantage of static type checking, etc.?
It is by no means my intent to belittle the work done on Spring Actionscript but more to find an application for it in my projects.
Besides XML configuration, Spring ActionScript also supports MXML configuration. The type of config (XML, MXML) depends on the use cases your application needs to support. For the reasons you mention, it makes perfect sense to configure most of the context in MXML, but I would encourage you to externalize the config of service endpoints in every case.
In a past project we opted for XML config since the configuration was generated at runtime when a user logged on to the application. Depending on the user credentials, different endpoints and various different settings were used. We could not have done this elegantly with static MXML configs.
Both config types have their strengths and weaknesses, and it's up to you to decide which type you want to use. I think we could actually support a mixture of MXML and XML quite easily, if that would make sense. As soon as we have Dynamic Proxies and class loading, XML config will make a lot more sense.
I would agree with Sean in the general sense that trying to force Flex inside of the Java box is generally a bad idea. As many similarities as there are, Flex is not Java.
That being said, there are plenty of reasons why you might want to have some of your configuration in an external XML file, not the least of which is in the use case of configuring your service destinations and endpoints, where you may have a need to be able to change the endpoint URI without having to recompile your application.
There are several projects available that are simply misguided ports of philosophies from other platforms. Whenever starting in on a new platform, I think the best thing to do is figure out how people are effectively developing and go from there.
I say all of that because I think all of the Java-esque frameworks for Flex/Flash leave you worse off than you started. You do need dependency injection, but there are good AS3/MXML-friendly frameworks for that (Mate, Swiz). There is absolutely no point in using XML when you can use MXML, which is strongly typed.
