I have a gRPC API where, following a refactor, a few packages were renamed. This includes the package declaration in one of our proto files that defines the API. Something like this:
package foo;
service BazApi {
  rpc FooEventStream(stream Ack) returns (stream FooEvent);
}
which was changed to
package bar;
service BazApi {
  rpc FooEventStream(stream Ack) returns (stream FooEvent);
}
The server side is implemented with grpc-java, with Scala and Monix on top.
This all works fine for clients that use the new proto files, but old clients that were built against the old proto files fail with: UNIMPLEMENTED: Method not found: foo.BazApi/FooEventStream.
The actual data format of the messages passed over the gRPC API has not changed, only the package.
Since we need to keep backwards compatibility, I've been looking into a way to make the old clients work while keeping the name change.
I was hoping to make this work with a generic ServerInterceptor which would be able to inspect an incoming call, see that it's from an old client (we have the client version in the headers) and redirect/forward it to the renamed service. (Since it's just the package name that changed, this is easy to figure out e.g. foo.BazApi/FooEventStream -> bar.BazApi/FooEventStream)
However, there doesn't seem to be an elegant way to do this. I think it's possible by starting a new ClientCall to the correct endpoint, and then handling the ServerCall within the interceptor by delegating to the ClientCall, but that will require a bunch of plumbing code to properly handle unary/clientStreaming/serverStreaming/bidiStreaming calls.
Is there a better way to do this?
If you can easily change the server, you can have it support both names simultaneously. You can consider a solution where you register your service twice, with two different descriptors.
Every service has a bindService() method that returns a ServerServiceDefinition. You can pass the definition to the server via the normal serverBuilder.addService().
So you could take the normal ServerServiceDefinition, rewrite it to the old name, and then register the rewritten copy as well.
BazApiImpl service = new BazApiImpl();
serverBuilder.addService(service); // register "bar"
ServerServiceDefinition barDef = service.bindService();
ServerServiceDefinition.Builder fooDefBuilder = ServerServiceDefinition.builder("foo.BazApi");
for (ServerMethodDefinition<?,?> barMethodDef : barDef.getMethods()) {
  MethodDescriptor desc = barMethodDef.getMethodDescriptor();
  String newName = desc.getFullMethodName().replace("bar.BazApi/", "foo.BazApi/");
  desc = desc.toBuilder().setFullMethodName(newName).build();
  fooDefBuilder.addMethod(desc, barMethodDef.getServerCallHandler());
}
serverBuilder.addService(fooDefBuilder.build()); // register "foo"
Using the lower-level "channel" API you can make a proxy without too much work. You mainly just proxy events from a ServerCall.Listener to a ClientCall and the ClientCall.Listener to a ServerCall. You get to learn about the lower-level MethodDescriptor and the rarely-used HandlerRegistry. There's also some complexity to handle flow control (isReady() and request()).
I made an example a while back, but never spent the time to merge it to grpc-java itself. It is currently available on my random branch. You should be able to get it working just by changing localhost:8980 and by re-writing the MethodDescriptor passed to channel.newCall(...). Something akin to:
MethodDescriptor desc = serverCall.getMethodDescriptor();
if (desc.getFullMethodName().startsWith("foo.BazApi/")) {
  String newName = desc.getFullMethodName().replace("foo.BazApi/", "bar.BazApi/");
  desc = desc.toBuilder().setFullMethodName(newName).build();
}
ClientCall<ReqT, RespT> clientCall
    = channel.newCall(desc, CallOptions.DEFAULT);
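Since the server side here is Scala anyway, a rough sketch of that proxying idea in Scala might look like the following. This is not the linked branch example; backendChannel is an assumed pre-built Channel to the real service, and the isReady()/request() flow-control handling mentioned above is omitted for brevity.
import io.grpc._

// Rough sketch only: forwards a ServerCall to a backend ClientCall, rewriting the method name.
class ForwardingHandler[ReqT, RespT](backendChannel: Channel) extends ServerCallHandler[ReqT, RespT] {
  override def startCall(serverCall: ServerCall[ReqT, RespT], headers: Metadata): ServerCall.Listener[ReqT] = {
    var desc: MethodDescriptor[ReqT, RespT] = serverCall.getMethodDescriptor
    if (desc.getFullMethodName.startsWith("foo.BazApi/")) {
      val newName = desc.getFullMethodName.replace("foo.BazApi/", "bar.BazApi/")
      desc = desc.toBuilder.setFullMethodName(newName).build()
    }
    val clientCall = backendChannel.newCall(desc, CallOptions.DEFAULT)

    // Pipe responses coming back on the ClientCall onto the original ServerCall.
    clientCall.start(new ClientCall.Listener[RespT] {
      override def onHeaders(responseHeaders: Metadata): Unit = serverCall.sendHeaders(responseHeaders)
      override def onMessage(message: RespT): Unit = { serverCall.sendMessage(message); clientCall.request(1) }
      override def onClose(status: Status, trailers: Metadata): Unit = serverCall.close(status, trailers)
    }, headers)
    clientCall.request(1)
    serverCall.request(1)

    // Pipe requests arriving on the ServerCall onto the ClientCall.
    new ServerCall.Listener[ReqT] {
      override def onMessage(message: ReqT): Unit = { clientCall.sendMessage(message); serverCall.request(1) }
      override def onHalfClose(): Unit = clientCall.halfClose()
      override def onCancel(): Unit = clientCall.cancel("call cancelled by client", null)
    }
  }
}
A handler like this would presumably be wired in for the old foo.BazApi names, e.g. via serverBuilder.fallbackHandlerRegistry(...) (the HandlerRegistry the answer mentions), while bar.BazApi calls hit the normally registered service.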
Related
I'm putting an object into the session, and then in a later step in the scenario I need to use properties of that object in an HTTP request.
The Gatling Expression Language does not support accessing properties of an object, so I thought I could extract the object from the session manually and then pull out the properties I needed in the HTTP request, using the following code:
exec(session => {
  val project = session("item").as[Project]
  println(s"name = ${project.getName}, daysToComplete = ${project.getDaysToComplete}")
  http("Health Check")
    .get(s"/health")
    .queryParam("name", s"${project.getName}")
  session
})
But structured this way, the HTTP request is not added to the chain and so never executes.
Is there any way to do this, short of putting the individual properties into the session? This is a simplified example; the object I'm putting into the session is much more complicated than this.
Already answered on Gatling's official mailing list.
This cannot work, please read the documentation: https://gatling.io/docs/gatling/reference/current/general/scenario/#exec
Gatling DSL components are immutable ActionBuilder(s) that have to be chained altogether and are only built once on startup. The result is a workflow chain of Action(s). These builders don’t do anything by themselves, they don’t trigger any side effect, they are just definitions. As a result, creating such DSL components at runtime in functions is completely meaningless. If you want conditional paths in your execution flow, use the proper DSL components (doIf, randomSwitch, etc)
exec { session =>
  if (someSessionBasedCondition(session)) {
    // just create a builder that is immediately discarded, hence doesn't do anything
    // you should be using a doIf here
    http("Get Homepage").get("http://github.com/gatling/gatling")
  }
  session
}
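For comparison, a minimal sketch of the doIf variant that the comment above points to (someSessionBasedCondition being the same hypothetical predicate as in that snippet):
// doIf builds the conditional branch once, at startup, instead of discarding a builder at runtime
doIf(session => someSessionBasedCondition(session)) {
  exec(http("Get Homepage").get("http://github.com/gatling/gatling"))
}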
You should do something like:
foreach(components, "component") {
  exec(
    http { session =>
      val component = session("component").as[ITestComponent]
      s"Upload Component ${component.getId}"
    }.post { session =>
      val component = session("component").as[ITestComponent]
      s"/component/$repoId/$assetId/${component.getId}/${component.getResourceVersionId}"
    }.bodyPart(
      RawFileBodyPart("resource", session => {
        val component = session("component").as[ITestComponent]
        component.getContent.getAbsolutePath
      }).contentType(session => {
        val component = session("component").as[ITestComponent]
        component.getMediaType
      }).fileName(session => {
        val component = session("component").as[ITestComponent]
        component.getContent.getName
      })
    ).asMultipartForm
  )
}
Yes, this is pretty complicated. The reason it looks so bloated is that you're trying to use a Java POJO (hidden behind an interface) instead of Scala case classes.
If you were to use a Scala case class, you could use the Gatling Expression Language (it doesn't support accessing POJOs by reflection atm) and do something like this:
foreach(components, "component") {
  exec(
    http("Upload Component ${component.id}")
      .post(s"/component/$repoId/$assetId/$${component.id}/$${component.resourceVersionId}")
      .bodyPart(
        RawFileBodyPart("resource", "${component.content.absolutePath}")
          .contentType("${component.content.mediaType}")
          .fileName("${component.content.name}")
      ).asMultipartForm
  )
}
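For reference, a hypothetical case class shape matching the EL paths used above (these names are illustrative assumptions, not part of the original code):
// Illustrative case classes only; field names mirror ${component.id},
// ${component.resourceVersionId} and the ${component.content.*} paths above.
case class ComponentContent(absolutePath: String, mediaType: String, name: String)
case class TestComponent(id: String, resourceVersionId: String, content: ComponentContent)
Each element of components would then be a TestComponent, so foreach stores a case class under the "component" key and the EL lookups can resolve its fields.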
We are using MediatR to implement a "pipeline" for our .NET Core Web API backend, trying to follow the CQRS principle.
I can't decide if I should try to implement a IPipelineBehavior chain, or if it is better to construct a new Request and call MediatR.Send from within my Handler method (for the request).
The scenario is essentially this:
User requests an action to be executed, i.e. Delete something
We have to check if that something is being used by someone else
We have to mark that something as deleted in the database
We have to actually delete the files from the file system.
Option 1 is what we have now: A DeleteRequest which is handled by one class, wherein the Handler checks if it is being used, marks it as deleted, and then sends a new TaskStartRequest with the parameters to Delete.
Option 2 is what I'm considering: A DeleteRequest which implements the marker interfaces IRequireCheck, IStartTask, with a pipeline which runs:
IPipelineBehavior<IRequireCheck> first to check if the something is being used,
IPipelineBehavior<DeleteRequest> to mark the something as deleted in database and
IPipelineBehavior<IStartTask> to start the Task.
I haven't fully figured out what Option 2 would look like, but this is the general idea.
I guess I'm mainly wondering if it is code smell to call MediatR.Send(TRequest2) within a Handler for a TRequest1.
If those are the options you're set on going with, I say Option 2. Sending requests from inside existing MediatR handlers can be seen as a code smell. You're hiding side effects and breaking the Single Responsibility Principle. You're also coupling your requests together, and you should try to avoid situations where you can't send one type of request before another.
However, I think there might be an alternative. If a delete request can't happen without the validation and marking beforehand, you may be able to leverage a preprocessor (example here) for your TaskStartRequest. That way you can have a single request that does everything you need. This even mirrors your pipeline example by simply leveraging the existing MediatR patterns.
Is there any need to break the tasks into multiple handlers? Maybe I am missing the point of MediatR. Wouldn't this suffice?
public async Task<Result<IFailure, ISuccess>> Handle(DeleteRequest request)
{
    var thing = await this.repo.GetById(request.Id);
    if (thing.IsBeingUsed())
    {
        return Failure.BeingUsed();
    }
    var deleted = await this.repo.Delete(request.Id);
    return deleted ? new Success(request.Id) : Failure.DbError();
}
Let's say we have a message containing the ID of some record in the database:
message Record {
  uint64 id = 1;
}
We also have an rpc call that returns all of the rows from table DATA that said record is mentioned in.
rpc GetDataForRecord(Record) returns (Data) {}
If we, for example, wrap Record in
message RqData {
  Record id = 1;
}
then once we need to return only, for example, "active" data, we won't need to add a GetActiveDataForRecord method; instead we could extend RqData as:
message RqData {
  Record id = 1;
  bool use_active = 2;
}
and use
rpc GetDataForRecord(RqData) returns (Data) {}
and clients that know about this new functionality will be able to use it, while older clients will keep calling the method as before, passing only the Record part inside the wrapper and never setting use_active.
Here's the question: is there really a reason to wrap everything into a separate request message like this, or am I overthinking things and passing the plain structures will do?
I'm trying to think ahead, but I'm not sure whether I'm overcomplicating things.
In general, making a method-specific request and response is a Good Thing™ and is encouraged. For a Foo method you'd have FooRequest and FooResponse. Having specialized messages for the method allows you to add new "arguments," as you mentioned.
But for some cases it turns out fine to break the pattern and avoid the wrapping; it's a judgement call. Although you're asking from a different perspective, you may be interested in this answer about related methods.
I make my SignalR connection in JavaScript like so:
$.connection.hub.start().done(function () {
  // do something
});
When using WebSockets, I can set the binaryType property like so:
var wsUri = "ws://localhost:8080/whiteboard/websocket";
var websocket = new WebSocket(wsUri);
websocket.binaryType = "blob";
or
websocket.binaryType = "arraybuffer";
Can I set this property for SignalR, and if so, how?
No you can't, at least up to the latest officially released version (2.x). SignalR is a set of layered abstractions over a physical connection, where all those abstractions must work with different transport strategies (WebSockets being just one of them), therefore they can only expose a common subset of features.
Anything related to the required type of data transmission/serialization is handled by SignalR automatically, except for certain portions when using a PersistentConnection, where only strings can be used. There is some room for changing certain behaviors by injecting custom implementations, but I think this one would be very hard to do and would probably conflict with SignalR's general goals.
I need to send an HTTP request (and get an XML response) from Flash that looks similar to the following:
http://example.com/somepath?data=1&data=2&data=3
I.e. having several parameters that share the same name but have different values.
Until now I have used the following code to make HTTP requests:
var resp:XML = new XML();
resp.onLoad = function(success:Boolean) {/*...*/};
resp.ignoreWhite = true;
var req:LoadVars = new LoadVars();
req["someParam1"] = 3;
req["someParam2"] = 12;
req.sendAndLoad("http://example.com/somepath", resp, "GET");
In this case this will not do: there will be only one parameter, carrying the last value.
What are my options? I'm using ActionScript 2.
Added
I guess I can do something like this:
var url:String = myCustomFunctionForBuildingRequestString();
var resp:XML = new XML();
resp.onLoad = function(success:Boolean) {/*...*/};
resp.load(url);
But in that case I lose the ability to do POST requests. Any alternatives?
Changing the request is not appropriate.
The standard HTTP way of sending array data is
http://example.com/?data[0]=1&data[1]=2
But this isn't wrong either (added from comment):
http://example.com/?data[]=1&data[]=2
Sending multiple parameters with the same name like you're doing in practice means that all but the last item should be ignored. This is because when reading the variables, the server overwrites (in memory) any item that has the same name, since silently renaming a variable isn't good practice and never was.
I don't know much AS (none :p), but you'd access it as a list or array or whatever data structure it has.
Although POST may carry multiple values for the same key, I'd be cautious using it, since some servers can't even handle that properly, which is probably why this isn't supported ... if you convert "duplicate" parameters to a list, the whole thing might start to choke if a parameter comes in only once and you suddenly wind up with a string or something ... but I guess you know what you're doing ...
I am sorry to say so, but what you want to do is not possible in pure AS2 ... the only 2 classes available for HTTP are LoadVars and XML ... technically there's also loadVariables, but it will simply copy properties from the passed object into the request, which doesn't change your problem, since properties are unique ...
If you want to stick to AS2, you need an intermediary tier:
A server to forward your calls: if you have access to the server, you create a new endpoint for AS2 clients, which decodes the requests and passes them on to the normal endpoint.
JavaScript: with flash.external::ExternalInterface you can call JavaScript code. You need to define a callback for when the operation is done, as well as a JavaScript function that you can call (there are other ways, but this should suffice). Build the request string inside Flash, pump it to JavaScript, and let JavaScript send it to the server in a POST request and hand the response back to Flash through the callback.
Up to you to decide which one is more work ...
Side note: in AS3, you'd use flash.net::URLLoader with dataFormat set to flash.net::URLLoaderDataFormat.TEXT, and then again encode the parameters to a string and send them.
Disclaimer: I've never used ActionScript and have no means of testing this.
Putting the same variable name with several values on the query string is the standard way of sending multi-value variables (for example, form checkboxes) to web servers. If LoadVars is capable of sending multiple values, then it seems plausible that the values should be stored in an array:
req["someParam1"] = ["foo","bar","bas"];
There also seems to be a decode function on LoadVars; what happens if you try to import the query string you want into the object?
req.decode("someParam1=foo&someParam1=bar&someParam1=bas");
You cannot use LoadVars like this, because data can be either 1 or 2 or 3, not all of them at the same time.
You can either pass it as a comma-separated list:
var req:LoadVars = new LoadVars();
req["data"] = "1,2,3";
or as an XML string, and parse it at the server. I am not familiar with manipulating XML in AS2, but this is how you'd do it in AS3:
var xml:XML = <root/>;
xml.appendChild(<data>1</data>);
xml.appendChild(<data>2</data>);
xml.appendChild(<data>3</data>);
//now pass it to loadvars
req["data"] = xml.toXMLString();
The string you send is:
<root>
  <data>1</data>
  <data>2</data>
  <data>3</data>
</root>