Envoy routing based on gRPC method and parameter - grpc

I have a Protobuf file Routing.proto used in a gRPC service. I also have an Envoy simple.yaml file.
I am trying to route-match in Envoy on the gRPC method service_B_hello(1), where 1 is a camera_id value wrapped inside CopyImageRequest.
How can I do this route matching in Envoy on the method parameter camera_id within the request?
Routing.proto:
syntax = "proto3";

package routing;

message CopyImageRequest {
  int32 camera_id = 1;
}

message CopyImageResponse {
  string result = 1;
}

service RoutingService {
  rpc service_A_hello(CopyImageRequest) returns (CopyImageResponse) {};
  rpc service_B_hello(CopyImageRequest) returns (CopyImageResponse) {};
  rpc service_C_hello(CopyImageRequest) returns (CopyImageResponse) {};
}
simple.yaml Envoy file:
routes:
- match:
    prefix: "/routing.RoutingService/service_B_hello"
    headers:
    - name: "method"
      exact_match: "service_B_hello"
    - name: "camera_id"
      regex_match: ^[0-3]$
    - name: content-type
      exact_match: application/grpc
    grpc: {}
  route:
    auto_host_rewrite: true
    cluster: grpc_stateless_service
TIA,
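For what it's worth, Envoy's route matcher only sees HTTP-level attributes such as the :path and request headers; it cannot look at fields inside the protobuf payload, so the camera_id header match above will only fire if the client also sends camera_id as gRPC metadata (and gRPC clients do not normally send a "method" header either; the path prefix already selects the method). Below is a minimal grpc-java client sketch of that idea; the stub and method names (RoutingServiceGrpc, serviceBHello) are assumptions based on the default Java code generator for Routing.proto, and the Envoy address is a placeholder.

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Metadata;
import io.grpc.stub.MetadataUtils;
import routing.Routing.CopyImageRequest;   // assumed generated message class
import routing.Routing.CopyImageResponse;  // assumed generated message class
import routing.RoutingServiceGrpc;         // assumed generated stub class

public class CameraRoutingClient {
  public static void main(String[] args) {
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("envoy-host", 10000).usePlaintext().build();

    // Duplicate camera_id as call metadata so Envoy sees it as an HTTP/2 header
    // and the headers matcher in simple.yaml can act on it.
    Metadata extra = new Metadata();
    extra.put(Metadata.Key.of("camera_id", Metadata.ASCII_STRING_MARSHALLER), "1");

    RoutingServiceGrpc.RoutingServiceBlockingStub stub =
        RoutingServiceGrpc.newBlockingStub(channel)
            .withInterceptors(MetadataUtils.newAttachHeadersInterceptor(extra));

    CopyImageResponse response =
        stub.serviceBHello(CopyImageRequest.newBuilder().setCameraId(1).build());
    System.out.println(response.getResult());

    channel.shutdownNow();
  }
}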

Related

oak on Deno - How do I build a URL to a route?

I come from a land of ASP.NET Core. Having fun learning a completely new stack.
I'm used to being able to:
1. name a route "orders"
2. give it a path like /customer-orders/{id}
3. register it
4. use the routing system to build a URL for my named route
An example of (4) might be to pass a routeName and then routeValues which is an object like { id = 193, x = "y" } and the routing system can figure out the URL /customer-orders/193?x=y - notice how it just appends extraneous key-vals as params.
Can I do something like this in oak on Deno?? Thanks.
Update: I am looking into some functions on the underlying regexp tool the routing system uses. It doesn't seem right that this often used feature should be so hard/undiscoverable/inaccessible.
https://github.com/pillarjs/path-to-regexp#compile-reverse-path-to-regexp
I'm not exactly sure what you mean by "building" a URL, but the URL associated with the incoming request is defined by the requesting client, and is available in each middleware callback function's context parameter at context.request.url as an instance of the URL class.
The documentation provides some examples of using a router and the middleware callback functions that are associated with routes in Oak.
Here's an example module which demonstrates accessing the URL-related data in a request:
so-74635313.ts:
import { Application, Router } from "https://deno.land/x/oak@v11.1.0/mod.ts";

const router = new Router({ prefix: "/customer-orders" });

router.get("/:id", async (ctx, next) => {
  // An instance of the URL class:
  const { url } = ctx.request;
  // An instance of the URLSearchParams class:
  const { searchParams } = url;
  // A string:
  const { id } = ctx.params;

  const serializableObject = {
    id,
    // Iterate all the [key, value] entries and collect into an array:
    searchParams: [...searchParams.entries()],
    // A string representation of the full request URL:
    url: url.href,
  };

  // Respond with the object as JSON data:
  ctx.response.body = serializableObject;
  ctx.response.type = "application/json";

  // Log the object to the console:
  console.log(serializableObject);

  await next();
});

const app = new Application();
app.use(router.routes());
app.use(router.allowedMethods());

function printStartupMessage({ hostname, port, secure }: {
  hostname: string;
  port: number;
  secure?: boolean;
}): void {
  if (!hostname || hostname === "0.0.0.0") hostname = "localhost";
  const address =
    new URL(`http${secure ? "s" : ""}://${hostname}:${port}/`).href;
  console.log(`Listening at ${address}`);
  console.log("Use ctrl+c to stop");
}

app.addEventListener("listen", printStartupMessage);

await app.listen({ port: 8000 });
In a terminal shell (I'll call it shell A), the program is started:
% deno run --allow-net so-74635313.ts
Listening at http://localhost:8000/
Use ctrl+c to stop
Then, in another shell (I'll call it shell B), a network request is sent to the server at the route described in your question — and the response body (JSON text) is printed below the command:
% curl 'http://localhost:8000/customer-orders/193?x=y'
{"id":"193","searchParams":[["x","y"]],"url":"http://localhost:8000/customer-orders/193?x=y"}
Back in shell A, the output of the console.log statement can be seen:
{
  id: "193",
  searchParams: [ [ "x", "y" ] ],
  url: "http://localhost:8000/customer-orders/193?x=y"
}
ctrl + c is used to send an interrupt signal (SIGINT) to the deno process and stop the server.
I am fortunately working with a React developer today!
Between us, we've found the .url(routeName, ...) method on the Router instance and that does exactly what I need!
Here's the help for it:
/** Generate a URL pathname for a named route, interpolating the optional
* params provided. Also accepts an optional set of options. */
Here it is in use, in context:
export const routes = new Router()
  .get(
    "get-test",
    "/test",
    handleGetTest,
  );

function handleGetTest(context: Context) {
  console.log(`The URL for the test route is: ${routes.url("get-test")}`);
}

// The URL for the test route is: /test

How can I download a .zip file over a http to grpc transcoded endpoint?

I have the grpc endpoint defined as below. The grpc endpoint returns a .zip file. It works OK over a grpc channel, but I'm having issues downloading it over the REST endpoint.
I use envoy to do the http transcoding.
My problem right now, on the REST endpoint, is that the response Content-Type header is always set to application/json instead of application/zip, so the zip download doesn't work properly.
Any idea how I can instruct Envoy to set the proper headers during transcoding, so that the REST download will work properly?
// Download build
//
// Download build
rpc DownloadBuild(DownloadBuildRequest) returns (stream DownloadResponse) {
  option (google.api.http) = {
    get: "/v4/projects/{projectId}/types/{buildType}/builds/{buildVersion}/.download"
    headers: {}
  };
  option (grpc.gateway.protoc_gen_swagger.options.openapiv2_operation) = {
    description: "Download build.";
    summary: "Download build.";
    tags: "Builds";
    produces: "application/zip";
    responses: {
      key: "200"
      value: {
        description: "Download build";
      }
    }
    responses: {
      key: "401"
      value: {
        description: "Request could not be authorized";
      }
    }
    responses: {
      key: "404"
      value: {
        description: "Build not found";
      }
    }
    responses: {
      key: "500"
      value: {
        description: "Internal server error";
      }
    }
  };
}
OK, I've found a way to partially solve my problem: use google.api.HttpBody as the response type and set the content type in there.
https://github.com/googleapis/googleapis/blob/master/google/api/httpbody.proto
https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/grpc_json_transcoder_filter#sending-arbitrary-content
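For reference, here is a rough sketch of what the RPC could look like with HttpBody as the response (my adaptation, not the exact final definition); the handler then sets content_type to application/zip and puts the zip bytes into data:

import "google/api/httpbody.proto";

// Download build
rpc DownloadBuild(DownloadBuildRequest) returns (stream google.api.HttpBody) {
  option (google.api.http) = {
    get: "/v4/projects/{projectId}/types/{buildType}/builds/{buildVersion}/.download"
  };
}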
I still have a problem with setting the downloaded file name and extension header.
To set the filename header, I had to define a server interceptor, similar to what is mentioned here:
How to pass data from grpc rpc call to server interceptor in java
private class TrailerCall<ReqT, RespT> extends SimpleForwardingServerCall<ReqT, RespT> {

  public TrailerCall(final ServerCall<ReqT, RespT> delegate) {
    super(delegate);
  }

  @Override
  public void sendHeaders(Metadata headers) {
    headers.merge(RESPONSE_HEADERS_HOLDER_KEY.get());
    super.sendHeaders(headers);
  }
}
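For context, a rough sketch (my assumptions, not the exact code from the linked answer) of how that interceptor can be wired up in grpc-java: RESPONSE_HEADERS_HOLDER_KEY is a Context key holding a Metadata instance that the service method fills in, for example with a content-disposition header carrying the file name.

import io.grpc.Context;
import io.grpc.Contexts;
import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;

public class ResponseHeadersInterceptor implements ServerInterceptor {

  // Context key the service implementation uses to publish extra response headers.
  public static final Context.Key<Metadata> RESPONSE_HEADERS_HOLDER_KEY =
      Context.key("response-headers-holder");

  @Override
  public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
      ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {
    // Expose an empty Metadata to the service method; TrailerCall (the inner class
    // shown above, assumed to be nested in this interceptor) merges it into the
    // response headers when they are sent.
    Context context = Context.current().withValue(RESPONSE_HEADERS_HOLDER_KEY, new Metadata());
    return Contexts.interceptCall(context, new TrailerCall<>(call), headers, next);
  }
}

// In the service method, before sending the response:
Metadata extra = ResponseHeadersInterceptor.RESPONSE_HEADERS_HOLDER_KEY.get();
extra.put(
    Metadata.Key.of("content-disposition", Metadata.ASCII_STRING_MARSHALLER),
    "attachment; filename=\"build.zip\"");

The interceptor would then be registered with ServerInterceptors.intercept(service, new ResponseHeadersInterceptor()) when building the server.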

Unable to call watson language translator using proxy

When I call the Watson Language Translator service over a public network, it responds with no error. Meanwhile, it's not able to get a response body over my private network.
I am using NGINX as my load balancer and have configured proxy_http for it in the configuration.
The error is
{ Error: Response not received. Body of error is HTTP ClientRequest object
at formatError (root\node_modules\ibm-cloud-sdk-core\lib\requestwrapper.js:115:17)
at D:\Rafiki Project\production build\Rafiki Production Files 1\ecobot-orchestrator-master_23_9-orch_persistency_fixes\node_modules\ibm-cloud-sdk-core\lib\requestwrapper.js:265:19
at process._tickCallback (internal/process/next_tick.js:68:7)
var languageTranslator = new LanguageTranslatorV2({
  username: '8******************',
  password: '*************',
  url: 'https://gateway.watsonplatform.net/language-translator/api/',
  version: '2017-05-26'
});

function translateToWSPLan(req, res, callback) {
  console.log("the request for translation is::");
  console.log(JSON.stringify(req));
  console.log("======================");
  languageTranslator.identify(req.body.identifyParams, function (err, data) {
    if (err) {
      console.log('=================error==========');
      console.log(err);
      console.log('=================================');
      var errorLog = { name: err.name, message: err.message };
      callback(errorLog);
    } else {
      // success path omitted in the original snippet
    }
  });
}
See this issue raised on the Node.js SDK for Watson - https://github.com/watson-developer-cloud/node-sdk/issues/900#issuecomment-509257669
To enable proxy routing, add proxy settings to the constructor:
var languageTranslator = new LanguageTranslatorV2({
  username: '8******************',
  password: '*************',
  url: 'https://gateway.watsonplatform.net/language-translator/api/',
  version: '2017-05-26',
  // other params...
  proxy: '<some proxy config>',
  httpsAgent: '<or some https agent config>'
});
If you take a look at the issue, there is a problem with accessing IAM tokens when there is a proxy, but as you appear to be using a userid/password combination, you should be OK. That is, until Cloud Foundry style credentials are suspended and superseded by IAM credentials for all existing Watson services.

How to enable only part of the api for Sockend on cote?

The documentation shows that when a user creates a socket connection, they can specify a namespace:
let socketNamespaced = io.connect('/rnd');
For the Sockend server initialization there is no mention of a namespace.
const sockend = new cote.Sockend(io, {
  name: 'Sockend',
  // key: 'a certain key'
});
From what I understand, the client chooses which namespace to connect to. Now, to avoid security issues, is there a way to enforce the socket namespace on the server side?
For example
const sockend = new cote.Sockend(io, {
  name: 'Sockend',
  namespace: '/cmd'
});
That way, only this namespace would be exposed over the socket, and there would be no chance of changing the client namespace and opening up the entire API through Sockend.
You define the namespace in the Responder. Using the property respondsTo, you then define which types are exposed publicly by Sockend:
var cmdResponder = new cote.Responder({
  name: 'CMD Responder',
  namespace: 'cmd',
  respondsTo: ['hello']
});

cmdResponder.on('hello', async (req) => { return 'hi'; });
Without respondsTo set on the Responders, the Sockend exposes no types by itself.
In this example, the /cmd namespace would only answer to 'hello'.

http: body field path 'foo' must be a non-repeated message.'

I'm trying to set up HTTP/JSON-to-gRPC transcoding with the gRPC helloworld example on Google Cloud Endpoints. My annotation in the helloworld.proto file is:
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {
    option (google.api.http) = {
      post: "/v1/hello"
      body: "name"
    };
  }
}
and my service config is:
http:
  rules:
  - selector: helloworld.Greeter.SayHello
    post: /v1/hello
    body: name
After generating the api_descriptor.pb file, I execute:
gcloud endpoints services deploy api_descriptor.pb api_config.yaml
and I get:
ERROR: (gcloud.endpoints.services.deploy) INVALID_ARGUMENT: Cannot convert to service config.
'ERROR: helloworld/helloworld.proto:43:3: http: body field path 'foo' must be a non-repeated message.'
Any help would be greatly appreciated. :)
Apparently, the body cannot be a base type. Wrapping the name in a message seems to work:
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {
    option (google.api.http) = {
      post: "/v1/hello"
      body: "person"
    };
  }
}

message Person {
  string name = 1;
}

// The request message containing the user's name.
message HelloRequest {
  Person person = 1;
}
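If the separate service config from the question is also deployed, the body there presumably needs the same change (my assumption, following the same rule from the error message):

http:
  rules:
  - selector: helloworld.Greeter.SayHello
    post: /v1/hello
    body: person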
