I am trying to implement a gRPC file upload service that will accept bytes as input.
But I want to know how I can test it using BloomRPC. I can test normal JSON gRPC services, but I don't see an option there to test file upload / streaming services.
Is there any way to test it from BloomRPC or another tool, without writing a Java client?
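One way to exercise such a service without a Java client is a small Node script. Below is a minimal TypeScript sketch; the upload.proto file and the FileService / Upload / FileChunk names are all hypothetical stand-ins for whatever your service actually defines:
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";
import { readFileSync } from "fs";

// Hypothetical proto (stand-in for your real definition):
//   syntax = "proto3";
//   package upload;
//   service FileService { rpc Upload (FileChunk) returns (UploadStatus); }
//   message FileChunk    { bytes content = 1; }
//   message UploadStatus { string message = 1; }
const def = protoLoader.loadSync("upload.proto");
const proto: any = grpc.loadPackageDefinition(def);

const client = new proto.upload.FileService(
    "localhost:50051",
    grpc.credentials.createInsecure()
);

// With @grpc/proto-loader, a bytes field accepts a Node Buffer,
// so the raw file contents can be sent as-is.
client.Upload({ content: readFileSync("test.pdf") }, (err: any, res: any) => {
    if (err) console.error("upload failed:", err);
    else console.log("server replied:", res);
});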
We have a microservice that needs to be integration tested in our pipeline (real calls, but no network communication with anything outside of the test namespace in Kubernetes). It also relies on an external gRPC server which we have no control over.
Above is a picture of what we'd like to have happen. The white box on the left is code that provides the Microservice Boundary with 'external' data. It then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated with those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses to the two gRPC services we call (blue box).
Since our integration tests are self-contained right now, and we don't want to write an entirely new gRPC server implementation just to mimic calls, is there a way to stand up a real gRPC server and configure it to return canned responses? The request is pretty much like a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret, and have it expose those as endpoints; the proto files must be able to have different package names
configure the responses to each call using files we can store in source control
run it in a Linux Docker container (no Windows)
I did find GripMock, which seemed almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 GripMock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today, there is a commercial tool that supports gRPC called Traffic Parrot, which allows you to create gRPC mocks based on your Proto files. You can give it multiple proto files, store the mocks in Git, and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on Proto files; you have to create them manually. Also, as of today the project has not been keeping up to date with Proto and gRPC developments, e.g. the package-name issue you discovered yourself above (it works only if the package names in the different proto files are the same). There are a few other open-source tools like grpc-wiremock, grpc-mock or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the Proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see the sample data as well.
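If none of the tools fit, the moving parts are small enough to hand-roll with the stock gRPC libraries. A minimal TypeScript sketch follows; the two proto files, their (deliberately different) package names, the service/method names, and the stubs.json file are all hypothetical:
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";
import { readFileSync } from "fs";

// Canned responses kept in source control, e.g. stubs.json:
//   { "GetOrder": { "id": "1", "status": "SHIPPED" }, "GetUser": { "name": "test" } }
const stubs = JSON.parse(readFileSync("stubs.json", "utf8"));

const server = new grpc.Server();

// Each proto file is loaded independently, so the package names may differ.
const services: [string, string, string, string[]][] = [
    ["orders.proto", "orders", "OrderService", ["GetOrder"]],
    ["users.proto", "users", "UserService", ["GetUser"]],
];
for (const [file, pkg, service, methods] of services) {
    const def = grpc.loadPackageDefinition(protoLoader.loadSync(file)) as any;
    const impl: grpc.UntypedServiceImplementation = {};
    for (const m of methods) {
        // Every method simply returns its canned response.
        impl[m] = (_call: any, cb: grpc.sendUnaryData<any>) => cb(null, stubs[m]);
    }
    server.addService(def[pkg][service].service, impl);
}

server.bindAsync("0.0.0.0:50051", grpc.ServerCredentials.createInsecure(), () => {
    server.start(); // runs the same way inside a Linux container
});
Since everything is driven by the proto files and stubs.json, both can live in source control and one container can serve all the dependencies.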
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
const implementations = {
    // Handler for the ExampleService.ex1 RPC: return a fixed response.
    ex1: (call: any, callback: any) => {
        const response: any =
            new this.proto.ExampleResponse.constructor({msg: "the response message"});
        callback(null, response); // first argument is the error, null on success
    },
};
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();
I need to transform POST data into a recordset on an IBM i system. For this purpose I'm using the QtmhCvtDB API.
https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzaie/rzaieapi_qtmhcvtdb.htm
But I'm calling it outside of the CGI stack (I'm implementing a command bus pattern), and I get:
Response code -5
This API is not valid when a program is not called by HTTP Server. No data parsing is done.
Is there something like QtmhCvtDB that can be executed from anywhere in the system?
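Not an IBM i answer as such, but for context: QtmhCvtDB essentially decodes the form-encoded CGI input (name=value&name2=value2, with %XX escapes) into a record structure described by DDS, so if no system-wide equivalent exists, the decoding itself is straightforward to reimplement. A TypeScript illustration of just the transformation (the real implementation would be RPG or C on the IBM i side):
// Decode an application/x-www-form-urlencoded body into name/value pairs,
// which is the transformation QtmhCvtDB performs before mapping to DDS fields.
function parsePostData(body: string): Record<string, string> {
    const record: Record<string, string> = {};
    for (const [name, value] of new URLSearchParams(body)) {
        record[name] = value; // map each field onto the target recordset column
    }
    return record;
}

// Example: "id=42&city=Madrid" becomes { id: "42", city: "Madrid" }.
console.log(parsePostData("id=42&city=Madrid"));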
I am new to AWS and I am trying to upload a PDF document to S3 through an AWS API. I am using an HTML form with a POST method. The action of the form is the URL of the deployed API. The API is integrated with a Lambda function. My question is: how can I extract the uploaded file inside the Lambda function, so I can do some processing before uploading it to S3? Is it even possible?
I have tried the instructions found in this post:
Passing HTTP Post from AWS API GW to Lambda
However, when I return the event from the Lambda function, this is what I get:
{ "file": "file.pdf", "acl": "private", "success_action_redirect": "http://localhost/", "AWSAccessKeyId": "my_aws_key" }
The file I uploaded is called file.pdf.
Any guidance will be appreciated.
A PDF file is a binary format. API Gateway does not currently support binary data. We know that binary data does not work, and there are no workarounds to make it work reliably. A number of customers have requested that we add binary support to API Gateway, and it is prioritized on our backlog.
I have an R package that I would like to host through Amazon Web Services, accessible via an API. The script should take a couple of input values and return the R output in JSON format. Also, the API should be able to handle multiple requests simultaneously.
So, for example, calling http://sampleapi.com/?location=USA&state=Florida would run the R package and return the output data to the calling application.
Has anyone done this before, or do you know of resources that explain how to do so? Thanks!
Thanks for all the suggestions. I decided to use Ruby for the API with the rinruby and rails-api gems and will host that through AWS Elastic Beanstalk. See this question for how I am setting it up - Ruby API - Accept parameters and execute script
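For comparison, the same pattern (an HTTP front end that shells out to R and relays its JSON output) is only a few lines in Node. A TypeScript sketch, where model.R is a hypothetical script that writes JSON to stdout (e.g. via jsonlite::toJSON) and Rscript is on the PATH:
import { createServer } from "http";
import { execFile } from "child_process";
import { URL } from "url";

createServer((req, res) => {
    const url = new URL(req.url ?? "/", "http://localhost");
    const location = url.searchParams.get("location") ?? "";
    const state = url.searchParams.get("state") ?? "";
    // Each request spawns its own R process, so simultaneous requests
    // run concurrently rather than queueing behind a single R session.
    execFile("Rscript", ["model.R", location, state], (err, stdout) => {
        if (err) { res.writeHead(500); res.end("R error"); return; }
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(stdout);
    });
}).listen(3000);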
I'm recording calls with the MixMonitor() application and it works fine, but recently I got a request to record each leg of the call in addition to the mixed file. I know that I can record each leg with Monitor() and then use an external script to mix them, but the problem is that this puts additional load on the server. So I wonder: can I do it by means of Asterisk itself? For example, by using Monitor and MixMonitor together?
You can specify the call-legs with MixMonitor's parameters:
MixMonitor(mixed.wav,r(in.wav)t(out.wav))
as stated in the description:
asterisk*CLI> core show application MixMonitor
r(file): Use the specified file to record the *receive* audio feed.
Like with the basic filename argument, if an absolute path isn't given,
it will create the file in the configured monitoring directory.
t(file): Use the specified file to record the *transmit* audio feed.
Like with the basic filename argument, if an absolute path isn't given,
it will create the file in the configured monitoring directory.