Waiting for a specific message in a gRPC stream of messages in Gatling load scripts - grpc

What I have: a gRPC endpoint that sends a stream of message updates in response.
What I want to test: a load test that opens a stream and keeps receiving messages until a message with a specific field is captured (within a defined timeout), without an explicit call to the pause method to wait for a fixed amount of time.
What I get from Gatling-gRPC: after an explicit call to the pause() method, the reconciliate method returns the last captured message.
Example server: here
Protofile: here
Test code:
package load.api

import com.github.phisgr.gatling.generic.SessionCombiner
import com.github.phisgr.gatling.grpc.Predef._
import example.myapp.helloworld.grpc.hello_world.{GreeterServiceGrpc, HelloRequest}
import io.gatling.core.Predef._
import io.grpc.{CallOptions, Status}

import java.util.concurrent.TimeUnit
import scala.concurrent.duration.DurationInt
import scala.language.postfixOps

class StreamSim extends Simulation {

  val greetCall = grpc("Greet")
    .serverStream("greeter")

  val scn = scenario("Greet Flow")
    .exec(
      greetCall.start(GreeterServiceGrpc.METHOD_IT_KEEPS_REPLYING)(HelloRequest("THIS IS TEST MESSAGE"))
        .extract(_.some.get.message.some)(_ saveAs "message")
        .sessionCombiner(SessionCombiner.pick("message"))
        .endCheck(statusCode is Status.Code.OK)
    )
    .pause(10 seconds)
    .exec(greetCall.copy(requestName = "Reconciliate greet").reconciliate)
    .exec { session =>
      println(s"%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%> Last character of the message: ${session.attributes("message")}")
      session
    }

  setUp(
    scn.inject(rampUsers(1) during (10 seconds))
      .protocols(grpc(managedChannelBuilder(name = "127.0.0.1", port = 8080).usePlaintext()).shareChannel.disableWarmUp)
  )
}

Related

FastAPI Custom Websocket Object

I want to be able to create a custom WebSocket object rather than using Starlette's so that I can add some more things in the constructor and add some more methods. In FastAPI, you're able to subclass the APIRoute and pass in your own Request object. How would I do the same for the WebSocket router?
As you say, there doesn't seem to be an easy way to set the websocket route class (short of a lot of subclassing and rewriting). I think the simplest way to do this would be to define your own wrapper class around the websocket, taking whatever extra data you want, and then define the methods you need. You can then inject it as a dependency, either with a separate function or by using the class itself as a dependency (see the documentation for details), which is what I'm doing below.
I've put together a minimal example, where the URL parameter name is passed to the wrapper class:
# main.py
from fastapi import Depends, FastAPI, WebSocket

app = FastAPI()


class WsWrapper:
    def __init__(self, websocket: WebSocket, name: str) -> None:
        self.name = name
        self.websocket = websocket

    # You can define all your custom logic here, I'm just adding a print
    async def receive_json(self, mode: str = "text"):
        print(f"Hello from {self.name}", flush=True)
        return await self.websocket.receive_json(mode)


@app.websocket("/{name}")
async def websocket(ws: WsWrapper = Depends()):
    await ws.websocket.accept()
    while True:
        data = await ws.receive_json()
        print(data, flush=True)
You can test it by running uvicorn main:app and connecting to ws://localhost:8000/test, and it should print "Hello from test" when receiving JSON.
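For a quick check from Python, here is a minimal client sketch (assuming the third-party websockets package is installed; the URL and payload are just examples):

import asyncio
import json

import websockets  # third-party: pip install websockets

async def main():
    # "test" is captured as the `name` path parameter by the wrapper
    async with websockets.connect("ws://localhost:8000/test") as ws:
        await ws.send(json.dumps({"msg": "hi"}))  # receive_json reads a JSON text frame
        await asyncio.sleep(0.1)  # let the server handle the message before we close

asyncio.run(main())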
Ended up just monkeypatching the modules. Track this PR for when monkeypatching isn't necessary: https://github.com/tiangolo/fastapi/pull/4968
from typing import Callable

from fastapi import routing as fastapi_routing
from starlette._utils import is_async_callable
from starlette.concurrency import run_in_threadpool
from starlette.requests import Request as StarletteRequest
from starlette.websockets import WebSocket as StarletteWebSocket
from starlette.types import ASGIApp, Receive, Scope, Send


class Request(StarletteRequest):
    pass


class WebSocket(StarletteWebSocket):
    pass


def request_response(func: Callable) -> ASGIApp:
    """
    Takes a function or coroutine `func(request) -> response`,
    and returns an ASGI application.
    """
    is_coroutine = is_async_callable(func)

    async def app(scope: Scope, receive: Receive, send: Send) -> None:
        request = Request(scope, receive=receive, send=send)
        if is_coroutine:
            response = await func(request)
        else:
            response = await run_in_threadpool(func, request)
        await response(scope, receive, send)

    return app


fastapi_routing.request_response = request_response


def websocket_session(func: Callable) -> ASGIApp:
    """
    Takes a coroutine `func(session)`, and returns an ASGI application.
    """
    # assert asyncio.iscoroutinefunction(func), "WebSocket endpoints must be async"

    async def app(scope: Scope, receive: Receive, send: Send) -> None:
        session = WebSocket(scope, receive=receive, send=send)
        await func(session)

    return app


fastapi_routing.websocket_session = websocket_session

In gRPC python, the Service class and the Serve method are always in the same file, why?

For example: the service class (link) and the serve() method (link); similarly, service class and serve().
I am new to Python and gRPC. In my project I wrote the serve() method in a different file, importing the service class. The server seems to start, but when I invoke it from client code (Postman) it doesn't work.
Here is my code: ems_validator_service.py contains the service class, and main.py has the serve() method.
File: ems_validator_service.py -
from validator.src.grpc import ems_validator_service_pb2
from validator.src.grpc.ems_validator_service_pb2_grpc import EmsValidatorServiceServicer


class EmsValidatorServiceServicer(EmsValidatorServiceServicer):
    def Validate(self, request, context):
        # TODO: logic to validate
        return ems_validator_service_pb2.GetStatusResponse(
            validation_status=ems_validator_service_pb2.VALIDATION_STATUS_IN_PROGRESS)

    def GetStatus(self, request, context):
        # TODO: logic to get actual status
        return ems_validator_service_pb2.GetStatusResponse(
            validation_status=ems_validator_service_pb2.VALIDATION_STATUS_IN_PROGRESS)
File: main.py -
from validator.src.grpc.ems_validator_service_pb2_grpc import (
    EmsValidatorServiceServicer,
    add_EmsValidatorServiceServicer_to_server
)
from concurrent import futures
import grpc


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    add_EmsValidatorServiceServicer_to_server(EmsValidatorServiceServicer(), server)
    server.add_insecure_port('localhost:50051')  # todo change it
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
With the above code I can't invoke the RPC, but if I move the serve() method into the ems_validator_service.py file and call it from main.py, then it works fine. I am not sure if this is a Python thing or a gRPC thing.
The error I get from client.py:
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNIMPLEMENTED
details = "Method not implemented!"
debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:50051 {created_time:"2022-10-19T22:10:18.898439-07:00", grpc_status:12, grpc_message:"Method not implemented!"}"
>
As already mentioned, the same client works fine if I move the above serve() method into the service-class file.
These are just simple examples; it's totally fine to call serve() from a different file.
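That said, the UNIMPLEMENTED error above is most likely caused not by the file split but by the import in main.py: it instantiates EmsValidatorServiceServicer from the generated ems_validator_service_pb2_grpc module, which is the abstract base class whose methods all answer with StatusCode.UNIMPLEMENTED, rather than the subclass defined in ems_validator_service.py (the subclass shadowing the imported base-class name makes this mix-up easy). A sketch of a corrected main.py, with the implementation's module path assumed from the file layout above:

# main.py
from concurrent import futures

import grpc

# import the concrete implementation, not the generated base class
# (module path assumed from the project layout described above)
from validator.src.grpc.ems_validator_service import EmsValidatorServiceServicer
from validator.src.grpc.ems_validator_service_pb2_grpc import (
    add_EmsValidatorServiceServicer_to_server,
)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    # register the subclass that actually implements Validate/GetStatus
    add_EmsValidatorServiceServicer_to_server(EmsValidatorServiceServicer(), server)
    server.add_insecure_port('localhost:50051')
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()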

Trace failed fastapi requests with opencensus

I'm using opencensus-python to trace requests to my Python FastAPI application running in production, exporting the data to Azure AppInsights using the opencensus exporters. I followed the Azure Monitor docs and was helped out by this issue post, which puts all the necessary bits in a useful middleware class.
Only later did I realize that requests which caused the app to crash, i.e. unhandled 5xx-type errors, would never be tracked, since the call that executes the request logic fails before any tracing happens. The Azure Monitor docs only talk about tracking exceptions through the logs, but this is separate from the tracing of requests, unless I'm missing something. I certainly wouldn't want to lose failed requests; these are super important to track! I'm accustomed to using the "Failures" tab in AppInsights to monitor any failing requests.
I figured the way to track these requests is to explicitly handle any internal exception using try/except and export the trace, manually setting the result code to 500. But I found it really odd that there seems to be no documentation of this, on opencensus or Azure.
The problem I have now is: this middleware function is expected to pass back a "response" object, which FastAPI then uses as a callable down the line (not sure why), but in the case where I caught an exception in the underlying processing (i.e. at await call_next(request)) I don't have any response to return. I tried returning None, but this just causes further exceptions down the line (None is not callable).
Here is my version of the middleware class. It is very similar to the issue post I linked, but I try/except over await call_next(request) rather than letting it fail unhandled; scroll down to the final five lines of code to see that.
import logging

from fastapi import Request
from opencensus.trace import (
    attributes_helper,
    execution_context,
    samplers,
)
from opencensus.ext.azure.trace_exporter import AzureExporter
from opencensus.trace import span as span_module
from opencensus.trace import tracer as tracer_module
from opencensus.trace import utils
from opencensus.trace.propagation import trace_context_http_header_format
from opencensus.ext.azure.log_exporter import AzureLogHandler
from starlette.types import ASGIApp

from src.settings import settings

HTTP_HOST = attributes_helper.COMMON_ATTRIBUTES["HTTP_HOST"]
HTTP_METHOD = attributes_helper.COMMON_ATTRIBUTES["HTTP_METHOD"]
HTTP_PATH = attributes_helper.COMMON_ATTRIBUTES["HTTP_PATH"]
HTTP_ROUTE = attributes_helper.COMMON_ATTRIBUTES["HTTP_ROUTE"]
HTTP_URL = attributes_helper.COMMON_ATTRIBUTES["HTTP_URL"]
HTTP_STATUS_CODE = attributes_helper.COMMON_ATTRIBUTES["HTTP_STATUS_CODE"]

module_logger = logging.getLogger(__name__)
module_logger.addHandler(AzureLogHandler(
    connection_string=settings.appinsights_connection_string
))


class AppInsightsMiddleware:
    """
    Middleware class to handle tracing of fastapi requests and exporting the data to AppInsights.

    Most of the code here is copied from a github issue:
    https://github.com/census-instrumentation/opencensus-python/issues/1020
    """

    def __init__(
        self,
        app: ASGIApp,
        excludelist_paths=None,
        excludelist_hostnames=None,
        sampler=None,
        exporter=None,
        propagator=None,
    ) -> None:
        self.app = app
        self.excludelist_paths = excludelist_paths
        self.excludelist_hostnames = excludelist_hostnames
        self.sampler = sampler or samplers.AlwaysOnSampler()
        self.propagator = (
            propagator or trace_context_http_header_format.TraceContextPropagator()
        )
        self.exporter = exporter or AzureExporter(
            connection_string=settings.appinsights_connection_string
        )

    async def __call__(self, request: Request, call_next):
        # Do not trace if the url is in the exclude list
        if utils.disable_tracing_url(str(request.url), self.excludelist_paths):
            return await call_next(request)

        try:
            span_context = self.propagator.from_headers(request.headers)
            tracer = tracer_module.Tracer(
                span_context=span_context,
                sampler=self.sampler,
                exporter=self.exporter,
                propagator=self.propagator,
            )
        except Exception:
            module_logger.error("Failed to trace request", exc_info=True)
            return await call_next(request)

        try:
            span = tracer.start_span()
            span.span_kind = span_module.SpanKind.SERVER
            span.name = "[{}]{}".format(request.method, request.url)
            tracer.add_attribute_to_current_span(HTTP_HOST, request.url.hostname)
            tracer.add_attribute_to_current_span(HTTP_METHOD, request.method)
            tracer.add_attribute_to_current_span(HTTP_PATH, request.url.path)
            tracer.add_attribute_to_current_span(HTTP_URL, str(request.url))
            execution_context.set_opencensus_attr(
                "excludelist_hostnames", self.excludelist_hostnames
            )
        except Exception:  # pragma: NO COVER
            module_logger.error("Failed to trace request", exc_info=True)

        try:
            response = await call_next(request)
            tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, response.status_code)
            tracer.end_span()
            return response
        # Explicitly handle any internal exception here, and set status code to 500
        except Exception as exception:
            module_logger.exception(exception)
            tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, 500)
            tracer.end_span()
            return None
I then register this middleware class in main.py like so:
app.middleware("http")(AppInsightsMiddleware(app, sampler=samplers.AlwaysOnSampler()))
Explicitly handle any exception that may occur while processing the API request. That allows you to finish tracing the request, setting the status code to 500. You can then re-raise the exception so that the application still raises it as expected:
try:
    response = await call_next(request)
    tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, response.status_code)
    tracer.end_span()
    return response
# Explicitly handle any internal exception here, and set status code to 500
except Exception as exception:
    module_logger.exception(exception)
    tracer.add_attribute_to_current_span(HTTP_STATUS_CODE, 500)
    tracer.end_span()
    raise exception

How to send HTTP request when "log.Fatal()" is executed?

In summary, I want to send system information to my HTTP server when log.Fatal() is called, without any extra code for every log statement. Changing/overriding the default behaviour of Info, Fatal, etc. would be fantastic.
In Python, there is a way to add an HTTP handler to the default logging library which in turn sends a POST HTTP request on log emit.
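(For reference, the Python mechanism referred to here is logging.handlers.HTTPHandler from the standard library; the host and path below are example values:)

import logging
import logging.handlers

# POST every emitted record to http://localhost:8080/logs (example endpoint)
handler = logging.handlers.HTTPHandler("localhost:8080", "/logs", method="POST")
logging.getLogger().addHandler(handler)

logging.fatal("something went wrong")  # the record is also POSTed to the server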
You can create a wrapper module for the builtin log package. Note that the request has to be sent (or handed to a queue) before delegating to the real Fatal, because log.Fatal calls os.Exit(1) and never returns.
yourproject/log/log.go

package log

import goLog "log"

func Fatal(v ...interface{}) {
    // send request ...
    // reqQueue <- some args
    // (this must happen before goLog.Fatal, which calls os.Exit(1))
    goLog.Fatal(v...)
}

Then replace the log import with the wrapper in your project:

// import "log"
import "yourproject/log"

func Foo() {
    log.Fatal(err)
}
Try creating a type that wraps the standard Logger type but adds your desired enhancement. By creating an instance of it called "log" that wraps the default logger, you can continue to use logging in your code in the same way with minimal changes, since it has the same name as the log package and retains *all of the Logger methods.
package main

import _log "log"

type WrappedLogger struct {
    // This field has no name, so we retain all the Logger methods
    *_log.Logger
}

// here we override the behaviour of log.Fatal
func (l *WrappedLogger) Fatal(v ...interface{}) {
    l.Println("doing the HTTP request")
    // do HTTP request

    // now call the original Fatal method from the underlying logger
    l.Logger.Fatal(v...)
}

// wrapping the default logger, but adding our new method
var log = WrappedLogger{_log.Default()}

func main() {
    // notice we can still use Println
    log.Println("hello")

    // but now Fatal does the special behaviour
    log.Fatal("fatal log")
}
*The only gotcha here is that we've replaced the typical log package with a log instance. In many ways, it behaves the same, since most of the functions in the log package are set up as forwards to the default Logger instance for convenience.
However, this means that our new log won't have access to the "true" functions from the log package, such as log.New. For that, you will need to reference the alias to the original package.
// want to create a new logger?
_log.New(out, prefix, flag)

Nimlang: Async program does not compile

I'm trying to write an HTTP server that sends an HTTP request and returns the content to the client.
Here is the code:
import asynchttpserver, asyncdispatch
import httpClient

let client = newHttpClient()
var server = newAsyncHttpServer()

proc cb(req: Request) {.async.} =
  let content = client.getContent("http://google.com")
  await req.respond(Http200, content)

waitFor server.serve(Port(8080), cb)
However, I obtain the following compile error message (nim v1.0.0):
Error: type mismatch: got <AsyncHttpServer, Port, proc (req: Request): Future[system.void]{.locks: <unknown>.}>
but expected one of:
proc serve(server: AsyncHttpServer; port: Port;
           callback: proc (request: Request): Future[void] {.closure, gcsafe.};
           address = ""): owned(Future[void])
  first type mismatch at position: 3
  required type for callback: proc (request: Request): Future[system.void]{.closure, gcsafe.}
  but expression 'cb' is of type: proc (req: Request): Future[system.void]{.locks: <unknown>.}
  This expression is not GC-safe. Annotate the proc with {.gcsafe.} to get extended error information.
expression: serve(server, Port(8080), cb)
The serve function expects a different expression, but I do not know how to fix this. Surprisingly, the code compiles perfectly fine when I remove the HTTP request from the server callback cb. Does this mean that the serve function expects different callback expressions depending on the callback body?
OK, the problem is that the HttpClient is a global variable that is used in the callback function cb. As a result, the callback function is not GC-safe.
It is therefore enough to instantiate the HttpClient within the callback function:
import asynchttpserver, asyncdispatch
import httpClient

var server = newAsyncHttpServer()

proc cb(req: Request) {.async.} =
  let client = newHttpClient()
  let content = client.getContent("https://google.com")
  await req.respond(Http200, content)

waitFor server.serve(Port(8080), cb)
