Let's assume that we have a Search service:
service Search {
  rpc Search (SearchRequest) returns (SearchReply) {}
}

message SearchRequest {
  string query = 1;
}

message SearchReply {
  repeated string message = 1;
}
Now, let's say we have multiple search engines that are all supposed to implement this interface to provide search.
The wall I am hitting is that gRPC only allows a single instance of the Search service behind a given IP:PORT pair. So the only way to integrate multiple search engines in one environment is to put each behind a different IP:PORT pair; it's impossible to have them on the same port.
You can't have multiple instances of the same service on the same server. There are three main ways to solve this, depending on the flavor of your problem:
Combine results
Include a request parameter
Use different service names
If the "multiple search engines" are semantically equivalent, then a separate method isn't necessary and instead respond with aggregated results.
If the "multiple search engines" are data-dependent, then include another parameter in SearchRequest, like string dataset = 2;.
Otherwise make separate services: ImageSearch and WebSearch.
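gRPC is fine with multiple differently named services sharing one IP:PORT, so the third option keeps everything on a single server. A sketch:

service ImageSearch {
  rpc Search (SearchRequest) returns (SearchReply) {}
}

service WebSearch {
  rpc Search (SearchRequest) returns (SearchReply) {}
}

Both implementations can then be registered on the same server instance (in grpc-java, for example, two addService(...) calls on one ServerBuilder), because the fully qualified service names differ.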
This probably isn't a typical setup, but due to higher-level decisions we ended up having multiple Kafka clusters within one app, multiple topics in each, and each topic might use a different serialization strategy: JSON or Avro. And Avro might be used with the Confluent schema registry or with single-object encoding.
Well, I got it working somehow by building my own abstractions and a registry that analyzes the configuration and creates most of the pieces manually, but I ended up repeating things like topic names and the schema registry URL in several places just to create all the needed beans. Ugly as hell.
I'd like to ask if there is some better way, or built-in support for this, that I might have overlooked.
I need to create N representations of Kafka clusters, each configured once: the topics belonging to a given cluster, the Confluent schema registry for topics where applicable, etc., so that I can create an instance from an Avro schema file, send it to a KafkaTemplate, and it will work.
Whether this will help depends on the complexity and on how different the configurations are, but you can override individual Kafka properties (such as bootstrap servers, deserializers, etc.) on the @KafkaListener and in each KafkaTemplate.
e.g.
@KafkaListener(id = "two", topics = "two",
        properties = "value.deserializer:org.apache.kafka.common.serialization.ByteArrayDeserializer")
public void listen2(byte[] in) {
    System.out.println("2: " + new String(in));
}
EDIT
The overrides can be externalized - like so:
@KafkaListener(id = "so67959209", topics = "so67959209",
        properties = "${consumer.overrides.one}")
public void listen(byte[] in) {
}
and
consumer.overrides.one=bootstrap.servers:localhost:9092\n \
key.deserializer:org.apache.kafka.common.serialization.ByteArrayDeserializer\n \
value.deserializer:org.apache.kafka.common.serialization.ByteArrayDeserializer
IMPORTANT: The overrides are raw Kafka property names, not the bootified versions - e.g. bootstrap.servers vs. Boot's bootstrap-servers.
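On the producer side you can do the same per template; since Spring Kafka 2.5 the KafkaTemplate constructor accepts config overrides. A minimal sketch, where the bean name and the cluster address are placeholders for illustration:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Bean
public KafkaTemplate<String, byte[]> clusterTwoTemplate(ProducerFactory<String, byte[]> pf) {
    Map<String, Object> overrides = new HashMap<>();
    // raw Kafka property names here too, not the bootified versions
    overrides.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "cluster-two:9092"); // placeholder address
    overrides.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
    return new KafkaTemplate<>(pf, overrides);
}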
I am curious to understand the best-practice approach when using the Axon Framework to validate that an email field is unique across the set of emails of all Contact aggregates.
Example setup
ContactCreateCommand {
    identifier = '123'
    name = 'ABC'
    email = 'info@abc.com'
}

ContactAggregate {
    ContactAggregate(ContactCreateCommand cmd) {
        // 1. cannot validate email here
        AggregateLifecycle.apply(
            new ContactCreatedEvent(/* fields ... */)
        );
    }
}
From my understanding of how this might be implemented, I have identified a number of possible ways to handle this, but perhaps there are more.
1. Do nothing in the Aggregate
This approach requires the invoker of the command to query for Contacts by email prior to sending the command, leaving a window of some milliseconds in which eventual consistency allows duplication.
Drawbacks:
Any "invoker" of the command would then be required to perform this validation check as its not possible to do this check inside the Aggregate using an Axon Query Handler.
Duplication can occur, so all projections based on these events need to handle this duplication somehow
2. Validate in a separate persistence layer
This approach introduces a new persistence layer used to validate uniqueness from inside the aggregate.
Inside the ContactAggregate command handler for ContactCreateCommand, we can then issue a query against this persistence layer (e.g. a table in Postgres with a unique index on it) and validate the email against this database, which contains the full set of email addresses.
Drawbacks:
Introduces an external persistence layer (external to the microservice) to guarantee uniqueness across Contacts
Scaling should be considered in the persistence layer; hitting it from a highly scaled aggregate could prove a bottleneck
3. Use a Saga and Singleton Aggregate
This approach enhances the previous setup by introducing an Aggregate that can only ever have one instance (e.g. the target identifier is always the same). This way we create a 'Singleton Aggregate' whose sole responsibility is to encapsulate the Set of all Contact email addresses.
ContactEmailValidateCommand {
    identifier = 'SINGLETON_ID_1'
    email = 'info@abc.com'
    customerIdentifier = '123'
}
UniqueContactEmailAggregate {
    @AggregateIdentifier
    private String identifier;
    Set<String> emails = new HashSet<>();

    on(ContactEmailValidateCommand cmd) {
        if (!emails.contains(cmd.email)) {
            AggregateLifecycle.apply(
                new ContactEmailValidatedEvent(/* fields ... */)
            );
        } else {
            AggregateLifecycle.apply(
                new ContactEmailInvalidatedEvent(/* fields ... */)
            );
        }
    }
}
After we do this check, we could then react appropriately to the ContactEmailValidatedEvent or ContactEmailInvalidatedEvent, which might invalidate the contact afterwards.
The benefit of this approach is that it keeps the persistence local to the Aggregate, which could give better scaling (as more nodes are added, more aggregates with locally managed Sets exist).
Drawbacks:
Quite a lot of boilerplate to replace a simple "create unique index"
This approach allows an 'invalid' Contact to pollute the Event Store forever
It is complex to ensure the 'Singleton Aggregate' is a true singleton (perhaps there is a simpler or better way)
The 'invoker' of the CreateContactCommand must check to see the outcome of the Saga
What do others do to solve this? I feel option 2 is perhaps the simplest approach, but are there other options?
What you are essentially looking for is Set Based Validation (I think this blog does a nice job explaining the concept and how to deal with it in Axon). In short: validating that some field is (or is not) contained in a set of data. When doing CQRS, this becomes a somewhat interesting concept to reason about, with several solutions out there (as you've already portrayed).
I think the best solution is summarized under your second option: use a dedicated persistence layer for the email addresses. You'd simply create a very concise model containing just the email addresses, which you would validate prior to issuing the ContactCreateCommand. Note that this persistence layer belongs to the Command Model, as it is used to perform business validation. You'd thus have an example where your Command Model contains not only Aggregates, but also Views. And as you've rightfully noted, this View needs to be optimized for its use case, of course. Maybe introducing a cache which is created on application start-up wouldn't be too bad.
To keep this email-addresses view as up to date as possible, it's smartest to update it in the same transaction in which the ContactCreatedEvent (which contains a new email address, I assume) is published. You can do this by having a dedicated Event Handling Component for your "Email Addresses View" which is updated through a SubscribingEventProcessor (a SEP). This works because the SEP is invoked on the same thread that publishes the event (your aggregate).
You have a couple of options when it comes to querying this model prior to sending the command. You could use a MessageDispatchInterceptor which only reacts to the ContactCreateCommand, for example. Or you introduce a Handler Enhancer dedicated to reacting to the ContactCreateCommand to perform this validation. Or you introduce another command, like RequestContactCreationCommand, which is targeted at a regular component. This component would handle the command, validate the model, and, if approved, dispatch a ContactCreateCommand.
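As a rough sketch of the view plus the dispatch-interceptor route (the class names, processing group, and getEmail() accessors are made up for illustration; assumes Axon 4):

import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiFunction;
import org.axonframework.commandhandling.CommandMessage;
import org.axonframework.config.ProcessingGroup;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.messaging.MessageDispatchInterceptor;

// View kept in sync in the publishing thread; assign its group to a SEP with
// configurer.eventProcessing().registerSubscribingEventProcessor("emailAddresses");
@ProcessingGroup("emailAddresses")
class EmailAddressesView {
    private final Set<String> emails = ConcurrentHashMap.newKeySet();

    @EventHandler
    public void on(ContactCreatedEvent event) {
        emails.add(event.getEmail());
    }

    public boolean exists(String email) {
        return emails.contains(email);
    }
}

// Rejects a ContactCreateCommand before it reaches the aggregate
class UniqueEmailDispatchInterceptor implements MessageDispatchInterceptor<CommandMessage<?>> {
    private final EmailAddressesView view;

    UniqueEmailDispatchInterceptor(EmailAddressesView view) {
        this.view = view;
    }

    @Override
    public BiFunction<Integer, CommandMessage<?>, CommandMessage<?>> handle(
            List<? extends CommandMessage<?>> messages) {
        return (index, command) -> {
            Object payload = command.getPayload();
            if (payload instanceof ContactCreateCommand
                    && view.exists(((ContactCreateCommand) payload).getEmail())) {
                throw new IllegalStateException("Email address already in use");
            }
            return command;
        };
    }
}

You'd register the interceptor on the command bus (commandBus.registerDispatchInterceptor(...)) so the check runs before command handling.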
That's my two cents on the situation, hope this helps @vcetinick!
I'm writing in C# for ASP.NET Web API 2. What I want is a catch-all method that will execute for every single request that comes to my Web API.
If the method returns null, then the original routing should continue, seeking out the correct method. However, if the method returns, say, an HttpResponseMessage, the server should return that response and not proceed with normal routing.
The use case would be the ability to handle various scenarios that may impact the entire API. For example: ban a single IP address, block (or whitelist) certain user agents, deal with API call counting (e.g. someone can only make X requests to any API method in Y minutes).
The only way I can imagine to do this right now is to literally include a method call in each and every new method I write for my API. For example,
[HttpGet]
public HttpResponseMessage myNewMethod()
{
    // I want to avoid having to do this in every single method.
    var check = methodThatEitherReturnsResponseOrNull(Request);
    if (check != null) return (HttpResponseMessage)check;
    // The method returned null so we go ahead with normal processing.
    ...
}
Is there some way to accomplish this in routing?
This is what Action Filters are for. These are attributes that you can place globally, at the class (controller) level, or at the method (action) level. These attributes can do preprocessing, where you execute some code before your action executes, or post-processing, where you execute code after the action executes.
When using preprocessing you have the option to return a result to the caller and not have your method (action) fire at all. This is good for model validation, authorization checks, etc.
To register a filter globally, edit the WebApiConfig.cs file.
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Filters.Add(new YourFilterAttribute()); // register your filter
        // rest of code
    }
}
To create a custom attribute, inherit from System.Web.Http.Filters.ActionFilterAttribute, or implement the System.Web.Http.Filters.IActionFilter interface. You can also implement IAuthorizationFilter/AuthorizationFilterAttribute if you specifically want to allow or deny a request.
It also sounds like you want to create multiple attributes, one for each concern, like IP filtering, call counting, etc. That way it would be more modular than one enormous authorization filter.
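As a rough sketch of such a filter (the ban-list lookup is a made-up placeholder):

using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class BlockBannedClientsAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        if (IsBanned(actionContext.Request))
        {
            // Setting a response here short-circuits the pipeline; the action never runs.
            actionContext.Response = actionContext.Request.CreateResponse(
                HttpStatusCode.Forbidden, "Access denied.");
        }
    }

    private static bool IsBanned(HttpRequestMessage request)
    {
        return false; // hypothetical IP / user-agent / call-count check goes here
    }
}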
There are many tutorials out there, like this one (chosen at random from my Google search results), that walk through complete examples.
I am building a SaaS application and would like to retain the single code base I have. I would like customers to be on separate sub-domains: cust1.saascompany.com, cust2.saascompany.com, etc.
However, I don't have any TenantIDs, and for multiple reasons I would prefer to stay with a separate database for each customer (the primary one being that it's already coded that way and doesn't make sense to change until usage warrants it). Each database contains the user login membership.
I'm guessing I would need separate web.configs for the connection strings? Or should I create a separate database that stores all the connection strings and any application-level variables/constants? Eventually, I would like to be able to automate this provisioning (again, when usage warrants it).
Are there some articles or posts that anyone can point me to regarding how to set this up with steps? I haven't been able to find what I'm looking for.
Technically, this is simple. We have been doing this for years. Although we use a different convention (my.domain.com/cust1, my.domain.com/cust2 plus URL rewriting), this doesn't change anything.
So, this is what you do. You create an abstract specification of a connection string provider:
public interface ICustomerInformationProvider
{
    string GetConnectionString( string customerId );
    ... // perhaps other information
}
then you provide any implementation you want, e.g.:
public class WebConfigCustomerInformationProvider : ICustomerInformationProvider { ... }
public class DatabaseConfigCustomerInformationProvider : ICustomerInformationProvider { ... }
public class XmlConfigCustomerInformationProvider : ICustomerInformationProvider { ... }
and you map your interface onto the implementation somehow (for example, using an IoC Container of your choice).
This gives you the chance to configure the provider during deployment; for example, one provider can be used by developers (reading connection strings from a file) and another in the production environment (reading connection strings from a database, which can be easily provisioned).
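For example, the web.config-backed variant could be as small as this sketch (assuming one connection string entry per customer id):

using System;
using System.Configuration;

public class WebConfigCustomerInformationProvider : ICustomerInformationProvider
{
    public string GetConnectionString(string customerId)
    {
        // assumes <connectionStrings><add name="cust1" connectionString="..."/> per customer
        var entry = ConfigurationManager.ConnectionStrings[customerId];
        if (entry == null)
            throw new InvalidOperationException("No connection string for customer: " + customerId);
        return entry.ConnectionString;
    }
}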
If you have other questions, feel free to ask.
I'm relatively new to utilizing web services. I'm trying to create one that will accept data from an ASP.NET form whose input controls are created dynamically at runtime, so I don't know how many control values will be passed.
I'm thinking I'll use jQuery's serialize() on the form to get the data, but what do I have the web service accept as a parameter? I thought maybe I could use serializeArray(), but I still don't know what type of variable to accept for the JavaScript array.
Finally, I was thinking that I might need to create a simple data transfer object to hold the data before sending it along to the web service. I just didn't want to go down the DTO route if there was a much simpler way or an established best practice that I should follow.
Thanks in advance for any direction you can provide, and let me know if I wasn't clear enough or if you have any questions.
The answer to the headline question (assuming this is an ASP.NET web service) is to use the params keyword in your web service method:
[WebMethod]
public void SendSomething(params string[] somethings)
{
    foreach (string s in somethings)
    {
        // do whatever you're gonna do
    }
}
Examples:
SendSomething("whatever");
SendSomething("whatever 1", "whatever 2", "whatever 3");
In fact, you don't really even need the params keyword: using an ordinary array as a parameter will let you pass in an unknown number of values.
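For example, a sketch of the plain-array variant; callers just build the array themselves:

[WebMethod]
public void SendSomething(string[] somethings)
{
    // same body as above; invoked as:
    // SendSomething(new string[] { "whatever 1", "whatever 2" });
}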
Well, I went with creating my own data transfer object, which I guess was always the front-of-mind solution; I was just thinking that there was probably a recognized best practice for handling this.