How to unit test a gRPC server's asynchronous methods? - grpc-python

Following the suggestions in this question I was able to unit test the synchronous methods of my gRPC service (which is built with the grpc.aio API) using the grpc_testing library. However, when I follow this example on an asynchronous method of my gRPC service I get:
ERROR grpc_testing._server._rpc:_rpc.py:92 Exception calling application!
Traceback (most recent call last):
File "/home/jp/venvs/grpc/lib/python3.8/site-packages/grpc_testing/_server/_service.py", line 63, in _stream_response
response = copy.deepcopy(next(response_iterator))
TypeError: 'async_generator' object is not an iterator
Looking through the grpc_testing codebase and searching more broadly, I cannot find examples of unit testing async gRPC methods. The closest thing I could find is an unmerged branch of pytest-grpc, but the example service does not have any async methods.
Can anyone share an example of unit testing an asynchronous gRPC method in python?

I followed @Lidi's recommendations (thank you) and implemented the tests using pytest-asyncio. For what it's worth, here is a basic example testing an async stream-stream method:
import mock
import grpc
import pytest

from tokenize_pb2 import MyMessage
from my_implementation import MyService


async def my_message_generator(messages):
    for message in messages:
        yield message


@pytest.mark.asyncio
async def test_my_async_stream_stream_method():
    service = MyService()
    my_messages = my_message_generator([MyMessage(), MyMessage()])
    mock_context = mock.create_autospec(spec=grpc.aio.ServicerContext)
    response = service.MyStreamStreamMethod(my_messages, mock_context)
    results = [x async for x in response]
    assert results == expected_results  # expected_results defined elsewhere in the test

gRPC Testing is a nice project. But we need engineering resources to make it support asyncio, and most importantly, adapt the existing APIs to asyncio's philosophy.
For testing gRPC asyncio, I would recommend just using pytest, which has pytest-asyncio to smoothly test asyncio features. Here is an example: code.

The solution given in Joshua's answer also works with python unittest framework utilizing the unittest.IsolatedAsyncioTestCase class. For example:
import mock
import grpc
import unittest

from example_pb2 import MyRequestMessage
from my_implementation import MyService


class Tests(unittest.IsolatedAsyncioTestCase):
    def setUp(self):
        self.service = MyService()
        self.mock_context = mock.create_autospec(spec=grpc.aio.ServicerContext)

    async def test_subscribe_unary_stream(self):
        response = self.service.MyUnaryStreamMethod(MyRequestMessage(), self.mock_context)
        async for result in response:
            self.assertIsNotNone(result)
While this allows testing of the actual business logic in the RPC functions, it falls short of grpcio-testing in terms of service features such as response codes and timeouts.
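For reference, the same pattern runs as-is on Python 3.8+ once the gRPC-specific pieces are stripped out; FakeService and its unary-stream method below are stand-ins of my own, not part of any answer above:

```python
import unittest


class FakeService:
    # Stand-in for a grpc.aio servicer with a unary-stream method
    async def MyUnaryStreamMethod(self, request, context):
        for i in range(3):
            yield {"reply": i}


class StreamTests(unittest.IsolatedAsyncioTestCase):
    def setUp(self):
        self.service = FakeService()

    async def test_unary_stream(self):
        # IsolatedAsyncioTestCase runs each async test in its own event loop
        response = self.service.MyUnaryStreamMethod(object(), None)
        results = [r async for r in response]
        self.assertEqual(len(results), 3)


runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(unittest.TestLoader().loadTestsFromTestCase(StreamTests))
print(result.wasSuccessful())  # True
```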

Related

Is there a way of using Gremlin within an asyncio Python application?

The TinkerPop documentation describes the GLV for Python. However, the examples presented there are built around synchronous code. There is the aiogremlin library, which was designed to enable use of Gremlin in Python's asyncio code. Unfortunately, the project seems to have been discontinued.
Does the official GLV support asyncio, or is there another way to use Gremlin in asynchronous Python applications?
I noticed that this question has sat unanswered so here goes...
The Gremlin Python client today uses Tornado. That may change in the future to just use aiohttp. Getting the event loops to play nicely together can be tricky. The easiest way I have found is to use the nest-asyncio library. With that installed you can write something like this. I don't show the g being created but this code assumes the connection to the server has been made and that g is the Graph Traversal Source.
import asyncio

import nest_asyncio

nest_asyncio.apply()


async def count_airports(g):
    # next() internally spins up its own event loop; nest_asyncio makes
    # that legal inside this already-running loop
    c = g.V().hasLabel('airport').count().next()
    print(c)


async def run_tests(g):
    await count_airports(g)


asyncio.run(run_tests(g))
As you mentioned the other option is to use something like aiogremlin.
Recent versions of the Gremlin Python library support async code, but the implementation is not entirely straightforward: for submitting queries, the async client returns a Future (Future documentation).
To make this easier to read and use, you can convert the Future object returned by the Gremlin API into a coroutine that supports the await syntax:
async def get_result(query, client):
    result = await asyncio.wrap_future(client.submitAsync(query))
    return result

client = gremlin_connection(environ.get("url"), environ.get("username"), environ.get("password"))
data = await get_result(query1, client)
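The same asyncio.wrap_future bridge can be seen in isolation with only the standard library; here a ThreadPoolExecutor stands in for the Gremlin client, and blocking_submit and the query string are placeholders of my own:

```python
import asyncio
import concurrent.futures


def blocking_submit(query):
    # Stand-in for a driver call that completes on a worker thread
    return f"result of {query}"


async def get_result(query, pool):
    # pool.submit() returns a concurrent.futures.Future, just like the
    # Gremlin client's async submit; wrap_future() makes it awaitable
    return await asyncio.wrap_future(pool.submit(blocking_submit, query))


async def main():
    with concurrent.futures.ThreadPoolExecutor() as pool:
        return await get_result("g.V().count()", pool)


data = asyncio.run(main())
print(data)  # result of g.V().count()
```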

Channels consumer blocks normal HTTP in Django?

I am running a development server locally
python manage.py runserver 8000
Then I run a script which consumes the Consumer below
from channels.generic.websocket import AsyncJsonWebsocketConsumer

class MyConsumer(AsyncJsonWebsocketConsumer):
    async def connect(self):
        import time
        time.sleep(99999999)
        await self.accept()
Everything runs fine and the consumer sleeps for a long time as expected. However I am not able to access http://127.0.0.1:8000/ from the browser.
The problem is bigger in real life, since the consumer needs to make an HTTP request to the same server, and essentially ends up in a deadlock.
Is this the expected behaviour? How do I allow calls to my server while a slow consumer is running?
Since this is an async function, you should be using asyncio's sleep.
import asyncio
from channels.generic.websocket import AsyncJsonWebsocketConsumer

class MyConsumer(AsyncJsonWebsocketConsumer):
    async def connect(self):
        await asyncio.sleep(99999999)
        await self.accept()
If you use time.sleep, you put the entire Python thread to sleep.
This also applies when you make your upstream HTTP request: you need to use an asyncio HTTP library, not a synchronous one. (Basically, you should be awaiting anything that is expected to take any time.)
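A minimal standard-library sketch of the difference: two coroutines that each wait 0.2 seconds finish in roughly 0.2 seconds total when they await asyncio.sleep, because the event loop can interleave them; with time.sleep in their place, the waits would serialize to about 0.4 seconds:

```python
import asyncio
import time


async def waiter():
    await asyncio.sleep(0.2)  # yields to the event loop instead of blocking it


async def main():
    start = time.monotonic()
    await asyncio.gather(waiter(), waiter())  # the two sleeps overlap
    return time.monotonic() - start


elapsed = asyncio.run(main())
print(f"two 0.2s waits took {elapsed:.2f}s")
```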

Flask-socketIO + Kafka as a background process

What I want to do
I have an HTTP API service, written in Flask, which is a template used to build instances of different services. As such, this template needs to be generalizable to handle use cases that do and do not include Kafka consumption.
My goal is to have an optional Kafka consumer running in the background of the API template. I want any service that needs it to be able to read data from a Kafka topic asynchronously, while also independently responding to HTTP requests as it usually does. These two processes (Kafka consuming, HTTP request handling) aren't related, except that they'll be happening under the hood of the same service.
What I've written
Here's my setup:
# ./create_app.py
from flask import Flask
from flask_socketio import SocketIO

socketio = None

def create_app(kafka_consumer_too=False):
    """
    Return a Flask app object, with or without a Kafka-ready SocketIO object as well
    """
    app = Flask('my_service')
    app.register_blueprint(special_http_handling_blueprint)
    if kafka_consumer_too:
        global socketio
        socketio = SocketIO(app=app, message_queue='kafka://localhost:9092', channel='some_topic')
        from .blueprints import kafka_consumption_blueprint
        app.register_blueprint(kafka_consumption_blueprint)
        return app, socketio
    return app
My run.py is:
# ./run.py
from . import create_app

app, socketio = create_app(kafka_consumer_too=True)

if __name__ == "__main__":
    socketio.run(app, debug=True)
And here's the Kafka consumption blueprint I've written, which is where I think it should be handling the stream events:
# ./blueprints/kafka_consumption_blueprint.py
from flask import Blueprint

from ..create_app import socketio

kafka_consumption_blueprint = Blueprint('kafka_consumption', __name__)

@socketio.on('message')
def handle_message(message):
    print('received message: ' + message)
What it currently does
With the above, my HTTP requests are being handled fine when I curl localhost:5000. The problem is that, when I write to the some_topic Kafka topic (on port 9092), nothing is showing up. I have a CLI Kafka consumer running in another shell, and I can see that the messages I'm sending on that topic are showing up. So it's the Flask app that's not reacting: no messages are being consumed by handle_message().
What am I missing here? Thanks in advance.
I think you are interpreting the meaning of the message_queue argument incorrectly.
This argument is used when you have multiple server instances. These instances communicate with each other through the configured message queue. This queue is 100% internal; there is nothing that you, as a user of the library, can do with the message queue.
If you wanted to build some sort of pub/sub mechanism, then you have to implement the listener for that in your application.
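A minimal, dependency-free sketch of that listener pattern, with names of my own: a plain queue.Queue stands in for the Kafka topic, and the callback stands in for whatever your application does with each message (in a real app, the background thread would poll a Kafka consumer and might call socketio.emit in the callback):

```python
import queue
import threading
import time


def run_listener(topic_queue, on_message, stop_event):
    # Poll the queue (standing in for a Kafka consumer) and hand each
    # message to the application callback until asked to stop
    while not stop_event.is_set():
        try:
            message = topic_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        on_message(message)


received = []
topic = queue.Queue()
stop = threading.Event()
worker = threading.Thread(target=run_listener, args=(topic, received.append, stop))
worker.start()

topic.put('hello from some_topic')
while not received:  # wait for the background thread to pick it up
    time.sleep(0.01)
stop.set()
worker.join()
print(received)  # ['hello from some_topic']
```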

running python application with asyncio async and await

I am trying to use asyncio and keywords await/async with python 3.5
I'm fairly new to asynchronous programming in python. Most of my experience with it has been with NodeJS. I seem to be doing everything right except for calling my startup function to initiate the program.
Below is some fictitious code that distills my confusion, because my code base is rather large and consists of several local modules.
import asyncio

async def get_data():
    foo = await <retrieve some data>
    return foo

async def run():
    await get_data()

run()
but I receive this asyncio warning:
RuntimeWarning: coroutine 'run' was never awaited
I understand what this error is telling me but I'm confused as to how I am supposed to await a call to the function in order to run my program.
You should create an event loop manually and run the coroutine in it, as shown in the documentation:
import asyncio

async def hello_world():
    print("Hello World!")

loop = asyncio.get_event_loop()
loop.run_until_complete(hello_world())
loop.close()
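On Python 3.7 and later, asyncio.run wraps all of that loop setup and teardown, so the same program can be written as (a slight variant that returns the string, so the result is visible):

```python
import asyncio


async def hello_world():
    return "Hello World!"


# asyncio.run creates the event loop, runs the coroutine to
# completion, and closes the loop for you
message = asyncio.run(hello_world())
print(message)  # Hello World!
```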

AsyncProcessor equivalent in Spring Integration

In Camel, there is an AsyncProcessor. Is there any equivalent in Spring Integration?
There are several components in Spring Integration which deal with async hand-off.
The @MessagingGateway can be configured with a ListenableFuture return type, which is fully similar to the mentioned AsyncProcessor in Apache Camel: http://docs.spring.io/spring-integration/docs/4.3.10.RELEASE/reference/html/messaging-endpoints-chapter.html#async-gateway.
Also this Gateway can have Mono return type for Reactive Streams manner of async processing.
For simple thread shifting and parallel processing there is an ExecutorChannel. The PublishSubscribeChannel also can be configured with the TaskExecutor for parallelism: http://docs.spring.io/spring-integration/docs/4.3.10.RELEASE/reference/html/messaging-channels-section.html#channel-configuration.
The QueueChannel can also be used for some kind of async tasks.
At the same time, any POJO invocation component (e.g. @ServiceActivator) can simply deal with a ListenableFuture as the return from the underlying POJO and perform similar async callback work: http://docs.spring.io/spring-integration/docs/4.3.10.RELEASE/reference/html/messaging-endpoints-chapter.html#async-service-activator
