Whenever I create an actor that needs some asynchronously obtained data to initialize itself, I find myself using an idiom like this. Does it have a name? (And is it the best way to do it?)
import java.util.UUID
import akka.actor.{Actor, Stash}

class AsyncInitActor(db: Database, someId: UUID) extends Actor with Stash {
  import context.dispatcher

  case class Initialize(something: Something)

  override def preStart(): Unit =
    db.getSomething(someId) onSuccess { case something =>
      self ! Initialize(something)
    }

  def receive: Receive = {
    case Initialize(something) =>
      context become initialized(something)
      unstashAll()
    case _ => stash()
  }

  def initialized(something: Something): Receive = {
    case whatever => // handle normal messages here
  }
}
In the case of actors created by cluster sharding, the asynchronous request happens in receive instead of preStart.
To be clear, I am not looking for a GoF pattern.
I have never used stash for this. If I really need to initialize, I'm more likely to use FSM to ensure that my actor is in a ready state.
http://doc.akka.io/docs/akka/current/scala/fsm.html
I suppose you could combine both approaches.
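To illustrate, here is a rough sketch of what combining FSM with Stash could look like, reusing the Database, Something and getSomething names from the question (one possible shape, not a canonical recipe):

import java.util.UUID
import akka.actor.{FSM, Stash}

sealed trait State
case object Uninitialized extends State
case object Ready extends State

sealed trait Data
case object NoData extends Data
final case class Loaded(something: Something) extends Data

class AsyncInitFsm(db: Database, someId: UUID) extends FSM[State, Data] with Stash {
  case class Initialize(something: Something)

  startWith(Uninitialized, NoData)

  override def preStart(): Unit = {
    import context.dispatcher
    db.getSomething(someId).foreach(something => self ! Initialize(something))
  }

  when(Uninitialized) {
    case Event(Initialize(something), _) =>
      unstashAll()
      goto(Ready) using Loaded(something)
    case Event(_, _) =>
      stash()
      stay()
  }

  when(Ready) {
    case Event(msg, Loaded(something)) =>
      // handle normal messages with the initialized state here
      stay()
  }

  initialize()
}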
The closest design pattern that comes to my mind is Proxy.
Per the documentation here:
Design a surrogate, or proxy, object that: instantiates the real object the first time the client makes a request of the proxy, remembers the identity of this real object, and forwards the instigating request to this real object. Then all subsequent requests are simply forwarded directly to the encapsulated real object.
In your case, you can think of the actor as being wrapped by a proxy. Once the actor is initialized from the database, the proxy forwards all requests to it, i.e. the real actor. Until then, the proxy returns null, a default actor, or something similar.
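As a loose sketch of that idea with actors: the proxy stashes messages until the real, initialized actor exists and then simply forwards everything. InitializedActor and the other names are purely illustrative, borrowing Database and Something from the question:

import java.util.UUID
import akka.actor.{Actor, ActorRef, Props, Stash}

class SomethingProxy(db: Database, someId: UUID) extends Actor with Stash {
  import context.dispatcher

  private case class Loaded(something: Something)

  override def preStart(): Unit =
    db.getSomething(someId).foreach(something => self ! Loaded(something))

  def receive: Receive = {
    case Loaded(something) =>
      // create the "real" actor only now that its data is available
      val real = context.actorOf(Props(new InitializedActor(something)))
      unstashAll()
      context.become(forwarding(real))
    case _ => stash()
  }

  def forwarding(real: ActorRef): Receive = {
    case msg => real forward msg
  }
}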
Whenever I create an actor that needs some asynchronously obtained data to initialize itself...
I would say you have basically described a Factory pattern. And if your actors are DDD concepts, then it would sit in the context of a DDD Repository. In other words, you are dealing with the complex creation of an entity along with its validation (the "needs ... data to initialize itself" part). One of the responsibilities of a DDD repository is to yield a valid aggregate root.
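A minimal sketch of that reading, again assuming the question's Database and Something types: a repository-style factory loads the state first and only then creates a fully initialized actor (a hypothetical InitializedActor taking its state as a constructor argument), so the actor itself never has an uninitialized phase:

import java.util.UUID
import scala.concurrent.{ExecutionContext, Future}
import akka.actor.{ActorRef, ActorSystem, Props}

// Hypothetical factory/repository: yields an actor only once its state is valid.
class SomethingRepository(db: Database)(implicit system: ActorSystem, ec: ExecutionContext) {
  def actorFor(someId: UUID): Future[ActorRef] =
    db.getSomething(someId).map { something =>
      system.actorOf(Props(new InitializedActor(something)))
    }
}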
My two cents.
Sergiy
I'm putting an object into the session, and then in a later step in the scenario I need to use properties of that object in an http request.
The Gatling expression language does not support accessing properties of an object, so I thought I could extract the object from the session manually and then extract the properties I needed in the http request using the following code.
exec(session => {
  val project = session("item").as[Project]
  println(s"name = ${project.getName}, daysToComplete = ${project.getDaysToComplete}")
  http("Health Check")
    .get(s"/health")
    .queryParam("name", s"${project.getName}")
  session
})
But structured this way the http request is not added into the chain and so does not execute.
Is there any way to do this, short of putting the individual properties into the session? This is a simplified example. The object I'm putting into the session is much more complicated than this.
Already answered on Gatling's official mailing list.
This cannot work, please read the documentation: https://gatling.io/docs/gatling/reference/current/general/scenario/#exec
Gatling DSL components are immutable ActionBuilder(s) that have to be chained altogether and are only built once on startup. The result is a workflow chain of Action(s). These builders don’t do anything by themselves, they don’t trigger any side effect, they are just definitions. As a result, creating such DSL components at runtime in functions is completely meaningless. If you want conditional paths in your execution flow, use the proper DSL components (doIf, randomSwitch, etc)
exec { session =>
  if (someSessionBasedCondition(session)) {
    // just create a builder that is immediately discarded, hence doesn't do anything
    // you should be using a doIf here
    http("Get Homepage").get("http://github.com/gatling/gatling")
  }
  session
}
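A minimal sketch of the doIf alternative the comment above refers to, assuming someSessionBasedCondition is a plain Session => Boolean predicate:

doIf(session => someSessionBasedCondition(session)) {
  exec(http("Get Homepage").get("http://github.com/gatling/gatling"))
}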
You should do something like:
foreach(components, "component") {
exec(
http { session =>
val component = session("component").as[ITestComponent]
s"Upload Component ${component.getId}"
}.post { session =>
val component = session("component").as[ITestComponent]
s"/component/$repoId/$assetId/${component.getId}/${component.getResourceVersionId}"
}
.bodyPart(RawFileBodyPart("resource", session => {
val component = session("component").as[ITestComponent]
component.getContent.getAbsolutePath()).contentType(component.getMediaType()).fileName(component.getContent.getName())).asMultipartForm
}
)
}
Yes, this is pretty complicated. The reason it looks so bloated is that you're trying to use a Java POJO (hidden behind an interface) instead of Scala case classes.
If you were to use a Scala case class, you could use Gatling Expression Language (it doesn't support accessing POJOs by reflection atm) and do something like this:
foreach(components, "component") {
exec(
http("Upload Component ${component.id}")
.post(s"/component/$repoId/$assetId/$${component.id}/$${component.resourceVersionId}")
.bodyPart(
RawFileBodyPart("resource", "${component.content.absolutePath}")
.contentType("${component.content.mediaType}")
.fileName("${component.content.name}")
).asMultipartForm
)
}
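For completeness, case classes shaped roughly like this (purely illustrative; the field names are simply chosen to match the ${component...} paths above) would let the Expression Language resolve those attributes:

// Illustrative only: shapes matching the EL paths used in the request above
final case class ComponentContent(absolutePath: String, mediaType: String, name: String)
final case class Component(id: String, resourceVersionId: String, content: ComponentContent)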
gRPC newbie. I have a simple api:
Customer getCustomer(int id)
List<Customer> getCustomers()
So my proto looks like this:
message ListCustomersResponse {
  repeated Customer customer = 1;
}
rpc ListCustomers (google.protobuf.Empty) returns (ListCustomersResponse);
rpc GetCustomer (GetCustomerRequest) returns (Customer);
I was trying to follow Google's lead on the style. Originally I had returns (stream Customer) for GetCustomers, but Google seems to favor the ListxxxResponse style. When I generate the code, it ends up being:
public void getCustomers(com.google.protobuf.Empty request,
StreamObserver<ListCustomersResponse> responseObserver) {
vs:
public void getCustomers(com.google.protobuf.Empty request,
StreamObserver<Customer> responseObserver) {
Am I missing something? Why would I want to go through the hassle of creating a ListCustomersResponse when I can just do stream Customer and get the streaming functionality?
The ListCustomersResponse is just streaming the whole list at once vs streaming each customer. Google's preference seems to be to return the ListCustomersResponse style all of the time.
When is it appropriate to use the ListxxxResponse vs the stream response?
This question is hard to answer without knowing what reference you're using. It's possible there's a miscommunication, or that the reference is simply wrong.
If you're looking at the gRPC Basics tutorial though, then I might have an inkling as to what caused a miscommunication. If that's indeed your reference, then it does not recommend returning repeated fields for streamed responses; your intuition is correct: you would just want to stream the singular Customer.
You might be reading rpc ListFeatures(Rectangle) in the tutorial as meaning an endpoint that returns a list [noun] of features. If so, that's a miscommunication. The guide actually means an endpoint to list [verb] features. It would have been less confusing if they had just written rpc GetFeatures(Rectangle).
So, your proto should look more like this,
rpc GetCustomers (google.protobuf.Empty) returns (stream Customer);
rpc GetCustomer (GetCustomerRequest) returns (Customer);
generating exactly what you suspected made more sense.
Update
Ah I see, so you're looking at this example in googleapis:
// Lists shelves. The order is unspecified but deterministic. Newly created
// shelves will not necessarily be added to the end of this list.
rpc ListShelves(ListShelvesRequest) returns (ListShelvesResponse) {
  option (google.api.http) = {
    get: "/v1/shelves"
  };
}

...

// Response message for LibraryService.ListShelves.
message ListShelvesResponse {
  // The list of shelves.
  repeated Shelf shelves = 1;

  // A token to retrieve next page of results.
  // Pass this value in the
  // [ListShelvesRequest.page_token][google.example.library.v1.ListShelvesRequest.page_token]
  // field in the subsequent call to `ListShelves` method to retrieve the next
  // page of results.
  string next_page_token = 2;
}
Yeah, I think you've probably figured the same by now, but here they have chosen to use a simple RPC, as opposed to a server-side streaming RPC (see here). I emphasize this because, I think the important choice is not the stylistic difference between repeated versus stream, but rather the difference between a simple request-response API versus a more complex and less-ubiquitous streaming API.
In the googleapis example above, they're defining an API that returns a fixed and static number of items per page, e.g. 10 or 50. It would simply be overcomplicated to use streaming for this, when pagination is already so well-understood and prevalent in software architecture and REST APIs. I think that is what they should have said, rather than "a small number." So the complexity of streaming (and the learning cost to you and future maintainers) has to be justified, that's all. Suppose, for example, you're actually fetching thousands of (x, y, z) items for a point cloud, or you're creating a live-updating bid-ask visualizer for some cryptocurrency.
Then you'd start asking yourself, "Is a simple request-response API my best option here?" So it just tends to be that, the larger the number of items needing to be returned, the more streaming APIs start to make sense. And that can be for conceptual reasons, e.g. the items are a live-updating stream in time like the above crypto example, or architectural, e.g. it would be more efficient to start displaying results in the UI as partial data streams back. I think the "small number" thing you read was an oversimplification.
We are using MediatR to implement a "Pipeline" for our dotnet core WebAPI backend, trying to follow the CQRS principle.
I can't decide if I should try to implement a IPipelineBehavior chain, or if it is better to construct a new Request and call MediatR.Send from within my Handler method (for the request).
The scenario is essentially this:
User requests an action to be executed, i.e. Delete something
We have to check if that something is being used by someone else
We have to mark that something as deleted in the database
We have to actually delete the files from the file system.
Option 1 is what we have now: A DeleteRequest which is handled by one class, wherein the Handler checks if it is being used, marks it as deleted, and then sends a new TaskStartRequest with the parameters to Delete.
Option 2 is what I'm considering: A DeleteRequest which implements the marker interfaces IRequireCheck, IStartTask, with a pipeline which runs:
IPipelineBehavior<IRequireCheck> first to check if the something is being used,
IPipelineBehavior<DeleteRequest> to mark the something as deleted in database and
IPipelineBehavior<IStartTask> to start the Task.
I haven't fully figured out what Option 2 would look like, but this is the general idea.
I guess I'm mainly wondering if it is code smell to call MediatR.Send(TRequest2) within a Handler for a TRequest1.
If those are the options you're set on going with - I say Option 2. Sending requests from inside existing Mediatr handlers can be seen as a code smell. You're hiding side effects and breaking the Single Responsibility Principle. You're also coupling your requests together and you should try to avoid situations where you can't send one type of request before another.
However, I think there might be an alternative. If a delete request can't happen without the validation and marking beforehand you may be able to leverage a preprocessor (example here) for your TaskStartRequest. That way you can have a single request that does everything you need. This even mirrors your pipeline example by simply leveraging the existing Mediatr patterns.
Is there any need to break the tasks into multiple Handlers? Maybe I am missing the point of MediatR. Wouldn't this suffice?
public async Task<Result<IFailure, ISuccess>> Handle(DeleteRequest request)
{
    var thing = await this.repo.GetById(request.Id);
    if (thing.IsBeingUsed())
    {
        return Failure.BeingUsed();
    }

    var deleted = await this.repo.Delete(request.Id);
    return deleted ? new Success(request.Id) : Failure.DbError();
}
Let's say we have a message containing the ID of some record in the database:
message Record {
  uint64 id = 1;
}
We also have an rpc call that returns all of the rows from table DATA that said record is mentioned in.
rpc GetDataForRecord(Record) returns (Data) {}
If we, for example, wrap Record in
message RqData {
  Record id = 1;
}
then once we need to only return, for example, "active" data, we won't need to make
GetActiveDataForRecord
instead we could extend RqData as:
message RqData {
  Record id = 1;
  bool use_active = 2;
}
and use
rpc GetDataForRecord(RqData) returns (Data) {}
and clients that know of this new functionality will be able to use it, while older clients will just use it as before, passing only the Record part within the RqData wrapper, without specifying active or not.
Here's the question: is there really a reason to use this kind of wrapping of everything into a separate request, or am I overthinking things and just passing plain structures will do?
I am trying to think about the future a bit, but I am not sure whether I am overcomplicating things.
In general, making a method-specific request and response is a Good Thing™ and is encouraged. For a Foo method you'd have FooRequest and FooResponse. Having specialized messages for the method allows you to add new "arguments," as you mentioned.
But for some cases it turns out fine to break the pattern and avoid the wrapping; it's a judgement call. Although you're asking from a different perspective, you may be interested in this answer about related methods.
I've been utilizing the command pattern in my Flex projects, with asynchronous callback routes required between:
whoever instantiated a given command object and the command object,
the command object and the "data access" object (i.e. someone who handles the remote procedure calls over the network to the servers) that the command object calls.
Each of these two callback routes has to be able to be a one-to-one relationship. This is due to the fact that I might have several instances of a given command class running the exact same job at the same time but with slightly different parameters, and I don't want their callbacks getting mixed up. Using events, the default way of handling asynchronicity in AS3, is thus pretty much out since they're inherently based on one-to-many relationships.
Currently I have done this using callback function references with specific kinds of signatures, but I was wondering if someone knew of a better (or an alternative) way?
Here's an example to illustrate my current method:
I might have a view object that spawns a DeleteObjectCommand instance due to some user action, passing references to two of its own private member functions (one for success, one for failure: let's say "deleteObjectSuccessHandler()" and "deleteObjectFailureHandler()" in this example) as callback function references to the command class's constructor.
Then the command object would repeat this pattern with its connection to the "data access" object.
When the RPC over the network has successfully been completed (or has failed), the appropriate callback functions are called, first by the "data access" object and then the command object, so that finally the view object that instantiated the operation in the first place gets notified by having its deleteObjectSuccessHandler() or deleteObjectFailureHandler() called.
I'll try one more idea:
Have your Data Access Object return its own AsyncTokens (or some other objects that encapsulate a pending call), instead of the AsyncToken that comes from the RPC call. So, in the DAO it would look something like this (this is very sketchy code):
public function deleteThing( id : String ) : DeferredResponse {
    var deferredResponse : DeferredResponse = new DeferredResponse();
    var asyncToken : AsyncToken = theRemoteObject.deleteThing(id);

    var result : Function = function( o : Object ) : void {
        deferredResponse.notifyResultListeners(o);
    };
    var fault : Function = function( o : Object ) : void {
        deferredResponse.notifyFaultListeners(o);
    };

    asyncToken.addResponder(new ClosureResponder(result, fault));
    return deferredResponse;
}
The DeferredResponse and ClosureResponder classes don't exist, of course. Instead of inventing your own you could use AsyncToken instead of DeferredResponse, but the public version of AsyncToken doesn't seem to have any way of triggering the responders, so you would probably have to subclass it anyway. ClosureResponder is just an implementation of IResponder that can call a function on success or failure.
Anyway, the way the code above does its business is that it calls an RPC service, creates an object encapsulating the pending call, returns that object, and then, when the RPC returns, one of the closures (result or fault) gets called; since they still have references to the scope as it was when the RPC call was made, they can trigger the methods on the pending call/deferred response.
In the command it would look something like this:
public function execute( ) : void {
    var deferredResponse : DeferredResponse = dao.deleteThing("3");
    deferredResponse.addEventListener(ResultEvent.RESULT, onResult);
    deferredResponse.addEventListener(FaultEvent.FAULT, onFault);
}
or, you could repeat the pattern, having the execute method return a deferred response of its own that would get triggered when the deferred response that the command gets from the DAO is triggered.
But. I don't think this is particularly pretty. You could probably do something nicer, less complex and less entangled by using one of the many application frameworks that exist to solve more or less exactly this kind of problem. My suggestion would be Mate.
Many of the Flex RPC classes, like RemoteObject, HTTPService, etc. return AsyncTokens when you call them. It sounds like this is what you're after. Basically the AsyncToken encapsulates the pending call, making it possible to register callbacks (in the form of IResponder instances) to a specific call.
In the case of HTTPService, when you call send() an AsyncToken is returned, and you can use this object to track the specific call, unlike the ResultEvent.RESULT, which gets triggered regardless of which call it is (and calls can easily come in in a different order than they were sent).
The AbstractCollection is the best way to deal with Persistent Objects in Flex / AIR. The GenericDAO provides the answer.
A DAO is an object that performs CRUD operations and other common operations on a ValueObject (known as a POJO in Java). A GenericDAO is a reusable DAO class that can be used generically.
Goal:
In the Java IBM GenericDAO, adding a new DAO simply requires these steps:
Add a value object (POJO).
Add an hbm.xml mapping file for the value object.
Add the 10-line Spring configuration file for the DAO.
We want to achieve a similar feat in the AS3 project Swiz DAO.
Client Side GenericDAO model:
Since we are working in a client-side language, we also need to manage a persistent object collection (for every value object).
Usage:
Source:
http://github.com/nsdevaraj/SwizDAO