Use Spring Data Couchbase to connect to different Couchbase clusters - spring-data-couchbase

I'm looking for a way to use Spring Data Couchbase to connect to two separate Couchbase clusters. Looking at the documentation and the implementation, it is not clear how to do this, and my concern is that there will be bean name conflicts if I have two configurations that extend AbstractCouchbaseConfiguration. The only thing that looks close is using RepositoryOperationsMapping to specify different templates for different repositories. However, that doesn't suit my needs, as each of the Couchbase configurations would not be aware of the other. The only options I see right now are either to not use AbstractCouchbaseConfiguration and set up my own beans, or to override all the beans in AbstractCouchbaseConfiguration and provide new bean names. In either case, I would override the Couchbase template bean name in the @EnableCouchbaseRepositories annotation. However, I'm not sure whether this is going to work or whether there is a better option.
Is this possible and if so, what is the best route for me to take?
Thank you

Could you elaborate on the use case that warrants creating and connecting to two separate clusters?
The best route here is still probably to define new Cluster, Bucket and CouchbaseTemplate beans, with custom names, in your existing AbstractCouchbaseConfiguration, and then use the configureRepositoryOperationsMapping() method in that configuration. It is essentially the approach described in the documentation's section on multiple buckets, but with a second Cluster bean added to the mix.
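For illustration, here is a rough sketch of what that could look like against Spring Data Couchbase 2.x. The host names, the archive* bean names and the ArchivedOrder entity are all invented, and the exact constructors and override signatures may differ slightly between versions, so treat this as a starting point rather than a drop-in configuration:

import java.util.Collections;
import java.util.List;

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.cluster.ClusterInfo;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.couchbase.config.AbstractCouchbaseConfiguration;
import org.springframework.data.couchbase.core.CouchbaseTemplate;
import org.springframework.data.couchbase.repository.config.EnableCouchbaseRepositories;
import org.springframework.data.couchbase.repository.config.RepositoryOperationsMapping;

@Configuration
@EnableCouchbaseRepositories
public class CouchbaseConfig extends AbstractCouchbaseConfiguration {

    // Primary cluster, handled by the usual abstract methods.
    @Override
    protected List<String> getBootstrapHosts() {
        return Collections.singletonList("couchbase-a.example.com");
    }

    @Override
    protected String getBucketName() {
        return "main";
    }

    @Override
    protected String getBucketPassword() {
        return "";
    }

    // Second cluster, wired up manually under its own bean names.
    @Bean
    public Cluster archiveCluster() {
        return CouchbaseCluster.create("couchbase-b.example.com");
    }

    @Bean
    public Bucket archiveBucket() {
        return archiveCluster().openBucket("archive", "");
    }

    @Bean
    public CouchbaseTemplate archiveTemplate() throws Exception {
        ClusterInfo info = archiveCluster().clusterManager("Administrator", "password").info();
        return new CouchbaseTemplate(info, archiveBucket());
    }

    // Route repositories for specific entities to the second template;
    // everything else keeps using the default couchbaseTemplate().
    @Override
    public void configureRepositoryOperationsMapping(RepositoryOperationsMapping baseMapping) {
        try {
            baseMapping.mapEntity(ArchivedOrder.class, archiveTemplate());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

With that in place, repositories whose entities are mapped to archiveTemplate() transparently hit the second cluster, while everything else continues to use the primary one.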

Related

Is this the correct ASP.NET Core way? Service & Dependency Injection

So I want to create a service that accesses an external API, and I want to cache common requests from the API inside that service. It depends on 3 other services, but I want to give it its own cache instance; MemoryDistributedCache might later be changed for something else.
services.AddSingleton<ISomeApi, SomeApi>(provider => new SomeApi(
Configuration.Get<Options>(),
new MemoryDistributedCache(new MemoryCache(new MemoryCacheOptions())),
provider.GetService<ILogger<SomeApi>>()
));
Now from my controllers I can access the API via DI. It works nicely, but I'm not sure if it's some sort of anti-pattern or if there are better ways of doing it.
The real problem is separating the internal cache: requesting IDistributedMemory from one service would give me the same object as requesting it from another service, and they must be kept separate.
This sounds like something you could use a proxy or decorator pattern for. The basic problem is that you have a service that does some data access, and another service responsible for caching the results of the first service. I realize you're not using a repository per se, but nonetheless the CachedRepository pattern should work for your needs. See here:
http://ardalis.com/introducing-the-cachedrepository-pattern
and
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
You can write your cached implementation such that it takes in the actual SomeApi type in its constructor if you don't need that part of the design to be flexible.
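To make the shape concrete, here is a minimal sketch of such a caching decorator. It is written in Java purely for illustration (the SomeApi interface, fetchThing method and CachedSomeApi class are invented); the same structure translates directly to your ISomeApi/SomeApi types in C#:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the real API abstraction the controllers depend on.
interface SomeApi {
    String fetchThing(String id);
}

// Decorator: implements the same interface, owns its own private cache,
// and delegates cache misses to the real implementation.
class CachedSomeApi implements SomeApi {
    private final SomeApi inner;
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    CachedSomeApi(SomeApi inner) {
        this.inner = inner;
    }

    @Override
    public String fetchThing(String id) {
        return cache.computeIfAbsent(id, inner::fetchThing);
    }
}

Consumers only ever see the SomeApi abstraction, so they cannot tell whether they are talking to the cached or the raw implementation, and because the cache lives inside the decorator it is never shared with any other service.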

How to introduce application-wide context object?

I need to make several properties accessible from the application's business layer. These are some IDs and common settings. Most of them are valid only for the request-response lifespan.
This is a web application (ASP.NET Web Forms to be specific) with dependency injection set up.
Currently those properties are passed through method parameters directly to the business layer's services. That works, but it is not very efficient since:
sometimes parameter values need to be passed deeper, which obscures readability a bit
some properties should be lazily resolved, and this should be done only once per request
retrieving properties that are resolved by touching a database can be confusing for new developers (there is no unified way of doing this)
some services are constructed by a factory which enriches them with some config parameters
I was thinking about introducing an application context interface, with an implementation in the main project, which would be created on every request. It could be injected into the services directly, making them parameterized automatically and independently (the services wouldn't need the factory anymore).
Is this how the problem should be tackled, or are there other options?
One thing I don't like about this option is that it might couple the main project with the business layer, which is not a perfect example of The Clean Architecture.
I'd say your solution is a very common one - inject an 'application context' into your classes. One thing I would be careful of, though, is making sure you are following the Interface Segregation Principle (from SOLID). Don't just start making all your classes expect an application context instance. Instead, design interfaces that split the application context up, and have your classes expect those as dependencies. Your application context will then need to implement all the interfaces.
This is the correct way to do things, as it decouples your classes from the implementation. Really, your classes don't care whether their dependency is one giant application context; they just care about the specific methods it implements. This will make your code more robust, since you reduce the risk of breaking something if you change the implementation of the application context later on.
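As a rough sketch of that splitting (shown in Java syntax purely for illustration; every interface and class name here is invented):

// Narrow role interfaces carved out of the application context.
interface CurrentUserProvider {
    String currentUserId();
}

interface RegionalSettings {
    String currencyCode();
}

// One per-request object can still implement all of them...
class RequestContext implements CurrentUserProvider, RegionalSettings {
    public String currentUserId() { return "u-42"; }   // in reality resolved lazily, once per request
    public String currencyCode()  { return "EUR"; }
}

// ...but each service only declares the slice it actually needs.
class InvoiceService {
    private final RegionalSettings settings;

    InvoiceService(RegionalSettings settings) {
        this.settings = settings;
    }

    String describe() {
        return "Invoices in " + settings.currencyCode();
    }
}

The DI container registers the per-request context once and hands it out under each of its interfaces, so changing the context implementation later only affects the services whose slice actually changed.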
Why don't you use a dependency injection container? Your global settings and parameters can be registered with it as pseudo-singletons, and then you will be able to neatly request them from any point inside your application.

Does AOP violate layered architecture for enterprise apps?

The question (as stated in the title) came to me recently as I was looking at Spring MVC 3.1 with annotation support, while also considering DDD for an upcoming project. In the new Spring, any POJO with its business methods can be annotated to act as a controller; all the concerns I would previously have addressed within a Controller class can be expressed exclusively through annotations.
So, technically, I can take any class and wire it up to act as a controller. The Java code is free of any controller-specific code, so the same class could also deal with things like checking security, starting transactions, etc. Will such a class belong to the presentation layer or the application layer?
Taking that argument even further, we can pull out things like security and transaction management and express them through annotations, so that the Java code is now purely that of the domain object. Does that mean we have fused the two layers together? Please clarify.
You can't take just any POJO and make it a controller. The controller's job is to get input from the browser, call services, prepare the model for the view, and return the view to dispatch to. It's still a controller. Instead of configuring it through XML and method overrides, you configure it through annotations, that's all.
The code is very far from being free of controller-specific code. It still uses ModelAndView, BindingResult, etc.
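For example, a minimal annotated controller (class, mapping and service names invented here) still does exactly that controller work:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
@RequestMapping("/orders")
public class OrderController {

    private final OrderService orderService;   // hypothetical business service

    @Autowired
    public OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    // Classic controller responsibilities, just expressed through annotations:
    // take web input, call the service, populate the model, choose the view.
    @RequestMapping("/{id}")
    public String show(@PathVariable("id") long id, Model model) {
        model.addAttribute("order", orderService.findOrder(id));
        return "orders/show";
    }
}

interface OrderService {
    Object findOrder(long id);   // invented for the sketch
}

Whatever class carries these annotations is doing controller work; the annotations move the configuration, not the responsibility.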
I'll approach the question's title, regarding AOP:
AOP does not violate "layered architecture", specifically because by definition it is adding application-wide functionality regardless of the layer the functionality is being used in. The canonical AOP example is logging: not a layer, but a functionality--all layers do logging.
To sort-of tie in AOP to your question, consider transaction management, which may be handled via Spring's AOP mechanism. "Transactions" themselves are not specific to any layer, although any given app may only require transactions in only a single layer. In that case, AOP doesn't violate layered architecture because it's only being applied to a single layer.
In an application where transactions may cross layers IMO it still doesn't violate any layering principles, because where the transactions live isn't really relevant: all that matters is that "this chunk of functionality must be transactional". Even if that transaction spans several app boundaries.
In fact, I'd say that using AOP in such a case specifically preserves layers, because the TX code isn't mechanically reproduced across all those layers, and no single layer needs to wonder (a) if it's being called in a transactional context, or (b) which transactional context it's in.
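As a small illustration of that "functionality, not a layer" point, a single Spring AOP aspect can apply the same advice to beans in any layer without those layers changing at all (the package names and the aspect below are invented, and this assumes aspect auto-proxying is enabled, e.g. via @EnableAspectJAutoProxy):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class TimingAspect {

    // One pointcut that cuts across service and repository layers alike.
    @Around("execution(* com.example..*Service.*(..)) || execution(* com.example..*Repository.*(..))")
    public Object logTiming(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();   // run the intercepted method unchanged
        } finally {
            long millis = (System.nanoTime() - start) / 1000000;
            System.out.println(pjp.getSignature() + " took " + millis + " ms");
        }
    }
}

None of the advised classes know the aspect exists, which is precisely why this kind of cross-cutting code doesn't break the layering: the layers stay exactly as they were.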

How to access the 'templating' service outside a controller

Ok, so the problem is:
I've got an 'order' entity, and it has a 'status' property. When the status changes, I want some other objects to be informed of this event, so I've decided to use the Observer pattern. One of the observers notifies clients via email. Now I want to render the email text from some Twig templates. As I understand from the book, rendering templates in controllers is done with the 'templating' service.
So the question is: how can I access the 'templating' service in my observer class?
Specification:
I was advised to implement my observer as a service, but I'm not sure about that. I've tried to solve this problem, and here are my options:
Use a registry. A straightforward but rigid solution. I guess it misses the whole point of DI and the service container. The huge plus of this solution is that I can access all common services from any point in my application.
Pass the needed services in from the context, via the constructor or via setters. This is more in the Sf2 spirit, but it brings another list of problems which are not related to this question.
Register the observers as services. I'm not really sure about this option, because the book says a service is common functionality, and I don't think that observing an entity with a number of discrete properties is a common task.
I'm looking for a solution in the Sf2 spirit that can be applied across the whole project, so answers with an explanation are appreciated.
As with any other service in a Symfony2 project, you can access it from within other classes through the dependency injector container. Basically what you would do is register your observer class as a service, and then inject the templating service into your observer service. See the docs for injecting services.
If you're not familiar with how Symfony handles dependency injection, I'd suggest reading that entire chapter of the documentation - it's very helpful. Also, if you want to find all the services that are registered for your application, you can use the console command container:debug. You can also append a service name after that to see detailed info about the service.
Edit
I read your changes to the question, but still recommend going down the DI route. That is the Symfony2 spirit :) You're worried that your observer isn't common enough to be used as a service, but there's no hard rule saying "You must use this piece of code in X locations in order for it to be 'common'".
Using the DIC comes with another huge benefit - it handles other dependencies for you. Let's say the templating service has 3 services injected into itself. When using the DIC, you don't need to worry about the templating service's dependencies - they are handled for you. All you care about is telling it "inject the templating service into this other service" and Symfony takes care of all the heavy lifting.
If you're really opposed to defining your observer as a service, you can use constructor or setter injection as long as you're within a container-aware context.

Why use facade pattern in EJB?

I've read through this article trying to understand why you would want a session bean between the client and the entity bean. Is it because, by letting the client access the entity bean directly, you would let the client know everything about the database?
So by having a middleman (the session bean) you would only let the client know part of the database, by implementing the business logic in a certain way. Only the part of the database which is relevant to the client is visible, possibly also increasing security.
Is the above statement true?
Avoiding tight coupling between the client & the business objects, increasing manageability.
Reducing fine-grained method invocations, which minimizes calls over the network by providing coarse-grained access to clients.
Centralized security & transaction constraints.
Greater flexibility & ability to cope with changes.
Exposing only what is required & providing a simpler interface to the clients, hiding the underlying complexity, inner details, and interdependencies between business components.
The article you cite is COMPLETELY out of date. Check the date, it's from 2002.
There is no longer any such thing as an entity bean in EJB (they are currently retained for backwards compatibility, but are on the verge of being purged completely). Entity beans were awkward things: a model object (e.g. Person) that lived completely in the container and where access to every property of it (e.g. getName, getAge) required a remote container call.
In this day and age, we have JPA entities that are POJOs and contain only data. Don't confuse a JPA entity with this ancient EJB entity bean. They sound similar but are completely different things. JPA entities can be safely sent to a (remote) client. If you are really concerned that the names used in your entity reveal your DB structure, you could use XML mapping files instead of annotations and use completely different names.
That said, session beans can still perfectly be used to implement the Facade pattern if that's needed. This pattern is indeed used to give clients a simplified and often restricted view of your system. It's just that the idea of using session beans as a Facade for entity beans is completely outdated.
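To make that concrete, here is a minimal sketch of a session-bean facade in the modern style (OrderFacade, the OrderLine entity and the JPQL path used below are all invented):

import java.math.BigDecimal;

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class OrderFacade {

    @PersistenceContext
    private EntityManager em;

    // One coarse-grained, transactional call for the client instead of many
    // fine-grained remote calls; the JPA entities stay behind the facade.
    public BigDecimal orderTotal(long customerId) {
        return em.createQuery(
                "select sum(l.price) from OrderLine l where l.order.customer.id = :id",
                BigDecimal.class)
            .setParameter("id", customerId)
            .getSingleResult();
    }
}

The client sees a single simple operation, the persistence details can change freely behind it, and the container still provides transactions and security declaratively on the session bean.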
It is to simplify the work of the client. The Facade presents a simple interface and hides the complexity of the model from the client. It also makes it possible for the model to change without affecting the client, as long as the facade does not change its interface.
It decouples the application logic from the business logic.
So the actual data structures and implementation can change without breaking existing code utilizing the APIs.
Of course, it also hides the data structure from "unknown" applications if you expose your beans to external networks.
