gRPC Python server to client communication through function call - grpc-python

Let's say I am building a simple chat app using gRPC with the following .proto:
service Chat {
  rpc SendChat(Message) returns (Void);
  rpc SubscribeToChats(Void) returns (stream Message);
}

message Void {}
message Message {
  string text = 1;
}
The way I often see the servicer implemented (in examples) in Python is like this:
class Servicer(ChatServicer):
    def __init__(self):
        self.messages = []

    def SendChat(self, request, context):
        self.messages.append(request.text)
        return Void()

    def SubscribeToChats(self, request, context):
        while True:
            if len(self.messages) > 0:
                yield Message(text=self.messages.pop())
While this works, it seems very inefficient to spawn an infinite loop that continuously checks a condition for each connected client. It would be preferable to instead have something like this, where the send is triggered right as a message comes in and doesn't require any constant polling on a condition:
class Servicer(ChatServicer):
    def __init__(self):
        self.listeners = []

    def SendChat(self, request, context):
        for listener in self.listeners:
            listener(Message(text=request.text))
        return Void()

    def SubscribeToChats(self, request, context, callback):
        self.listeners.append(callback)
However, I can't seem to find a way to do something like this using gRPC.
I have the following questions:
Am I correct that an infinite loop is inefficient for a case like this? Or are there optimizations happening in the background that I'm not aware of?
Is there any efficient way to achieve something similar to my preferred solution above? It seems like a fairly common use case, so I'm sure there's something I'm missing.
Thanks in advance!

I figured out an efficient way to do it. The key is to use the AsyncIO API. Now my SubscribeToChats function can be an async generator, which makes things much easier.
Now I can use something like an asyncio Queue, which my function can await on in a while loop. Similar to this:
import asyncio

class Servicer(ChatServicer):
    def __init__(self):
        self.queue = asyncio.Queue()

    async def SendChat(self, request, context):
        await self.queue.put(request.text)
        return Void()

    async def SubscribeToChats(self, request, context):
        while True:
            yield Message(text=await self.queue.get())
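One caveat with a single shared queue: `queue.get()` consumes each item, so every message reaches exactly one subscriber. For a chat broadcast you would want one queue per subscriber. A minimal pure-asyncio sketch of that fan-out (the `Broadcaster` class is hypothetical, independent of gRPC):

```python
import asyncio

class Broadcaster:
    """Fan out each published message to every subscriber's private queue."""

    def __init__(self):
        self.subscribers = []  # one asyncio.Queue per connected client

    def publish(self, text):
        for queue in self.subscribers:
            queue.put_nowait(text)

    def subscribe(self):
        queue = asyncio.Queue()
        self.subscribers.append(queue)  # register eagerly, before the first get()

        async def stream():
            try:
                while True:
                    yield await queue.get()
            finally:
                # Drop the queue when the client disconnects.
                self.subscribers.remove(queue)

        return stream()

async def main():
    b = Broadcaster()
    first = b.subscribe()
    second = b.subscribe()
    b.publish("hello")  # buffered in both queues
    return await first.__anext__(), await second.__anext__()

print(asyncio.run(main()))  # ('hello', 'hello')
```

In the servicer, `SubscribeToChats` would then `async for msg in broadcaster.subscribe()` and yield `Message(text=msg)`, so each connected client sees every message.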

Related

FastAPI Custom Websocket Object

I want to be able to create a custom WebSocket object rather than using Starlette's so that I can add some more things in the constructor and add some more methods. In FastAPI, you're able to subclass the APIRoute and pass in your own Request object. How would I do the same for the WebSocket router?
As you say, there doesn't seem to be an easy way to set the websocket route class (short of a lot of subclassing and rewriting). I think the simplest approach is to define your own wrapper class around the websocket, taking whatever extra data you want, and defining the methods you need. You can then inject it as a dependency, either with a separate function or by using the class itself as a dependency (see the documentation for details), which is what I'm doing below.
I've put together a minimal example, where the URL parameter name is passed to the wrapper class:
# main.py
from fastapi import Depends, FastAPI, WebSocket

app = FastAPI()

class WsWrapper:
    def __init__(self, websocket: WebSocket, name: str) -> None:
        self.name = name
        self.websocket = websocket

    # You can define all your custom logic here, I'm just adding a print
    async def receive_json(self, mode: str = "text"):
        print(f"Hello from {self.name}", flush=True)
        return await self.websocket.receive_json(mode)

@app.websocket("/{name}")
async def websocket(ws: WsWrapper = Depends()):
    await ws.websocket.accept()
    while True:
        data = await ws.receive_json()
        print(data, flush=True)
You can test it by running uvicorn main:app and connecting to ws://localhost:8000/test, and it should print "Hello from test" when receiving JSON.
Ended up just monkeypatching the modules. Track this PR for when monkeypatching isn't necessary: https://github.com/tiangolo/fastapi/pull/4968
from typing import Callable

from fastapi import routing as fastapi_routing
from starlette._utils import is_async_callable
from starlette.concurrency import run_in_threadpool
from starlette.requests import Request as StarletteRequest
from starlette.websockets import WebSocket as StarletteWebSocket
from starlette.types import ASGIApp, Receive, Scope, Send

class Request(StarletteRequest):
    pass

class WebSocket(StarletteWebSocket):
    pass

def request_response(func: Callable) -> ASGIApp:
    """
    Takes a function or coroutine `func(request) -> response`,
    and returns an ASGI application.
    """
    is_coroutine = is_async_callable(func)

    async def app(scope: Scope, receive: Receive, send: Send) -> None:
        request = Request(scope, receive=receive, send=send)
        if is_coroutine:
            response = await func(request)
        else:
            response = await run_in_threadpool(func, request)
        await response(scope, receive, send)

    return app

fastapi_routing.request_response = request_response

def websocket_session(func: Callable) -> ASGIApp:
    """
    Takes a coroutine `func(session)`, and returns an ASGI application.
    """
    # assert asyncio.iscoroutinefunction(func), "WebSocket endpoints must be async"
    async def app(scope: Scope, receive: Receive, send: Send) -> None:
        session = WebSocket(scope, receive=receive, send=send)
        await func(session)

    return app

fastapi_routing.websocket_session = websocket_session
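The patch above works because other code looks up `fastapi_routing.request_response` as a module attribute at call time, so reassigning that attribute reroutes every later lookup. The mechanism can be illustrated with a small stdlib-only sketch (the patched behavior here is hypothetical, just to show the pattern):

```python
import json

# Save the original so it can still be delegated to (or restored later).
_original_dumps = json.dumps

def patched_dumps(obj, **kwargs):
    # Force a deterministic key order in every caller that goes through json.dumps.
    kwargs.setdefault("sort_keys", True)
    return _original_dumps(obj, **kwargs)

# Reassigning the module attribute affects all code that calls json.dumps
# via the module (i.e. `json.dumps(...)`), but NOT code that bound the
# function earlier with `from json import dumps`.
json.dumps = patched_dumps

print(json.dumps({"b": 1, "a": 2}))  # {"a": 2, "b": 1}
```

The same caveat applies to the FastAPI patch: it has to run before the routes are registered, so that the router picks up the patched functions.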

Is it possible to integrate GCP pub/sub SteamingPullFutures with discordpy?

I'd like to use a pub/sub StreamingPullFuture subscription with discordpy to receive instructions for removing users and sending updates to different servers.
Ideally, I would start this function when starting the discordpy server:
@bot.event
async def on_ready():
    print(f'{bot.user} {bot.user.id}')
    await pub_sub_function()
I looked at discord.ext.tasks but I don't think this use case fits since I'd like to handle irregularly spaced events dynamically.
I wrote this pub_sub_function() (based on the pub/sub Python client docs), but it doesn't seem to listen to pub/sub or return anything:
def pub_sub_function():
    subscriber_client = pubsub_v1.SubscriberClient()
    # existing subscription
    subscription = subscriber_client.subscription_path(
        'my-project-id', 'my-subscription')

    def callback(message):
        print(f"pubsub_message: {message}")
        message.ack()
        return message

    future = subscriber_client.subscribe(subscription, callback)
    try:
        future.result()
    except KeyboardInterrupt:
        future.cancel()  # Trigger the shutdown.
        future.result()  # Block until the shutdown is complete.
Has anyone done something like this? Is there a standard approach for sending data/messages from external services to a discordpy server and listening asynchronously?
Update: I got rid of pub_sub_function() and changed the code to this:
subscriber_client = pubsub_v1.SubscriberClient()
# existing subscription
subscription = subscriber_client.subscription_path('my-project-id', 'my-subscription')

def callback(message):
    print(f"pubsub_message: {message}")
    message.ack()
    return message

@bot.event
async def on_ready():
    print(f'{bot.user} {bot.user.id}')
    await subscriber_client.subscribe(subscription, callback).result()
This works, sort of, but now the await subscriber_client.subscribe(subscription, callback).result() is blocking the discord bot, and returning this error:
WARNING discord.gateway Shard ID None heartbeat blocked for more than 10 seconds.
Loop thread traceback (most recent call last):
OK, so this GitHub PR was very helpful.
In it, the user says that modifications are needed to make it work with asyncio because of Google's pseudo-future implementation:
Google implemented a custom, pseudo-future
need monkey patch for it to work with asyncio
But basically, to make the pub/sub future act like the concurrent.futures.Future, the discord.py implementation should be something like this:
async def pub_sub_function():
    subscriber_client = pubsub_v1.SubscriberClient()
    # existing subscription
    subscription = subscriber_client.subscription_path('my-project-id', 'my-subscription')

    def callback(message):
        print(f"pubsub_message: {message}")
        message.ack()
        return message

    future = subscriber_client.subscribe(subscription, callback)
    # Fix the Google pseudo-future to behave like a concurrent Future:
    future._asyncio_future_blocking = True
    future.__class__._asyncio_future_blocking = True
    real_pubsub_future = asyncio.wrap_future(future)
    return real_pubsub_future
and then you need to await the function like this:
@bot.event
async def on_ready():
    print(f'{bot.user} {bot.user.id}')
    await pub_sub_function()
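The reason `asyncio.wrap_future` helps is that it bridges a thread-backed `concurrent.futures.Future` into an awaitable `asyncio.Future`, so the event loop (and the bot's heartbeat) is never blocked while waiting. A self-contained sketch of the same bridge, using a standard thread pool in place of the pub/sub client:

```python
import asyncio
import concurrent.futures
import time

def slow_blocking_job():
    # Stands in for the background work the pub/sub client does in its own threads.
    time.sleep(0.1)
    return "message-received"

async def main():
    with concurrent.futures.ThreadPoolExecutor() as pool:
        thread_future = pool.submit(slow_blocking_job)
        # wrap_future turns the thread-backed future into one the event loop
        # can await cooperatively, instead of calling the blocking .result().
        result = await asyncio.wrap_future(thread_future)
    return result

print(asyncio.run(main()))  # message-received
```

This is exactly what went wrong in the earlier attempt: calling `.result()` directly parked the whole event loop thread, which is why the heartbeat warning appeared.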

How to use asynchronous coroutines like a generator?

I want to develop a WebSocket watcher in Python such that when I send something, it waits until the response is received (sort of like blocking socket programming). I know it sounds weird; basically I want to make a command-line Python 3.6 tool that can communicate with the server while keeping the same connection alive for all the commands coming from the user.
I can see that the below snippet is pretty typical using python 3.6.
import asyncio
import websockets
import json
import traceback

async def call_api(msg):
    async with websockets.connect('wss://echo.websocket.org') as websocket:
        await websocket.send(msg)
        while websocket.open:
            response = await websocket.recv()
            return response

print(asyncio.get_event_loop().run_until_complete(call_api("test 1")))
print(asyncio.get_event_loop().run_until_complete(call_api("test 2")))
but this creates a new WebSocket connection for every command, which defeats the purpose. One might say you have to use the async handler, but I don't know how to synchronize the WebSocket response with the user input from the command prompt.
I am thinking that if I could make the async coroutine (call_api) work like a generator, with a yield statement instead of return, then I could probably do something like the below:
async def call_api(msg):
    async with websockets.connect('wss://echo.websocket.org') as websocket:
        await websocket.send(msg)
        while websocket.open:
            response = await websocket.recv()
            msg = yield response

generator = call_api("cmd1")
cmd = input(">>>")
while cmd != 'exit':
    result = next(generator.send(cmd))
    print(result)
    cmd = input(">>>")
Please let me know your valuable comments.
Thank you
This can be achieved using an asynchronous generator (PEP 525).
Here is a working example:
import random
import asyncio

async def accumulate(x=0):
    while True:
        x += yield x
        await asyncio.sleep(1)

async def main():
    # Initialize
    agen = accumulate()
    await agen.asend(None)
    # Accumulate random values
    while True:
        value = random.randrange(5)
        print(await agen.asend(value))

asyncio.run(main())
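Applied to the question's use case, `call_api` can itself become an async generator that opens the connection once and receives a new command at each `asend`. A self-contained sketch, with a stand-in echo class in place of `websockets.connect` (the `EchoConnection` class is hypothetical, used only to keep the example runnable offline):

```python
import asyncio

class EchoConnection:
    """Stand-in for a websocket connection that echoes whatever is sent."""
    async def send(self, msg):
        self._last = msg
    async def recv(self):
        return f"echo: {self._last}"

async def call_api(msg):
    # With the real library this would be:
    #   async with websockets.connect(url) as ws:
    ws = EchoConnection()
    while True:
        await ws.send(msg)
        response = await ws.recv()
        # Suspend until the caller sends the next command; the connection
        # object stays alive across commands.
        msg = yield response

async def main():
    session = call_api("cmd1")
    first = await session.asend(None)   # prime: runs to the first yield
    second = await session.asend("cmd2")
    return first, second

print(asyncio.run(main()))  # ('echo: cmd1', 'echo: cmd2')
```

Note that, as in the accumulate example, the first `asend(None)` is required to start the generator; subsequent `asend(cmd)` calls deliver each user command over the same connection.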

How to make a couple of async method calls in django2.0

I am doing a small project and decided to use Django 2.0 and Python 3.6+.
In my Django view, I want to call a bunch of REST APIs, get their results (in any order), and then process my request (saving something to the database).
I know the right way would be to use aiohttp, define an async method, and await on it.
I am confused about get_event_loop() and whether the view method should itself be an async method if it has to await the responses from these methods.
Also, does Django 2.0 itself (being implemented in Python 3.6+) have a loop that I can just add to?
Here is the view I am envisioning
import asyncio

from rest_framework import generics
from aiohttp import ClientSession

class CreateView(generics.ListCreateAPIView):
    def perform_create(self, serializer):
        await get_rest_response([url1, url2])

async def fetch(url, session):
    async with session.get(url) as response:
        return await response.read()

async def get_rest_response(urls):
    async with ClientSession() as session:
        tasks = []
        for url in urls:
            task = asyncio.ensure_future(fetch(url, session))
            tasks.append(task)
        responses = await asyncio.gather(*tasks)
Technically you can do it by loop.run_until_complete() call:
class CreateView(generics.ListCreateAPIView):
    def perform_create(self, serializer):
        loop = asyncio.get_event_loop()
        loop.run_until_complete(get_rest_response([url1, url2]))
But I doubt if this approach will significantly speed up your code.
Django is a synchronous framework anyway.
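The shape of that call, a synchronous entry point driving several coroutines to completion concurrently, can be sketched with plain asyncio (the stub coroutines stand in for aiohttp requests, which are assumed but not exercised here):

```python
import asyncio

async def fetch_stub(name, delay):
    # Stands in for an aiohttp request; sleeps instead of doing network I/O.
    await asyncio.sleep(delay)
    return f"{name}-done"

async def gather_all():
    # The requests run concurrently; gather returns results in argument order.
    return await asyncio.gather(fetch_stub("url1", 0.02), fetch_stub("url2", 0.01))

def perform_create_like():
    # Synchronous entry point (like a Django view) driving async work to completion.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(gather_all())
    finally:
        loop.close()

print(perform_create_like())  # ['url1-done', 'url2-done']
```

Because `run_until_complete` blocks the request thread until everything finishes, the view is still synchronous overall; the concurrency only applies among the API calls themselves.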

How to wrap asynchronous and gen functions together in Tornado?

My code looks like the below; the error is 'Future' object has no attribute 'body'.
Did I place the decorators in the wrong way?
import tornado.httpclient
import tornado.web
import tornado.gen
import tornado.httpserver
import tornado.ioloop

class Class1(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def post(self, *args, **kwargs):
        url = self.get_argument('url', None)
        response = self.json_fetch('POST', url, self.request.body)
        self.write(response.body)
        self.finish()

    @tornado.gen.engine
    def json_fetch(self, method, url, body=None, *args, **kwargs):
        client = tornado.httpclient.AsyncHTTPClient()
        headers = tornado.httputil.HTTPHeaders({"content-type": "application/json charset=utf-8"})
        request = tornado.httpclient.HTTPRequest(url, method, headers, body)
        yield tornado.gen.Task(client.fetch, request)
You don't need "asynchronous" in this code example. "gen.engine" is obsolete, use "coroutine" instead. You don't generally need to use "gen.Task" much these days, either. Make four changes to your code:
Wrap "post" in "coroutine"
"yield" the result of self.json_fetch instead of using the result directly.
No need to call "finish" in a coroutine, Tornado finishes the response when a coroutine completes.
Wrap json_fetch in "coroutine", too.
The result:
class ClubCreateActivity(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def post(self, *args, **kwargs):
        url = self.get_argument('url', None)
        response = yield self.json_fetch('POST', url, self.request.body)
        self.write(response.body)

    @tornado.gen.coroutine
    def json_fetch(self, method, url, body=None, *args, **kwargs):
        client = tornado.httpclient.AsyncHTTPClient()
        headers = tornado.httputil.HTTPHeaders({"content-type": "application/json charset=utf-8"})
        request = tornado.httpclient.HTTPRequest(url, method, headers, body)
        response = yield client.fetch(request)
        raise tornado.gen.Return(response)
Further reading:
The section on coroutines in the Tornado User's Guide
Tornado async request handlers
My article on refactoring coroutines
The recommended method in the Tornado official documentation is using @tornado.gen.coroutine and yield together.
If you want to use both "asynchronous" and the advantages of yield, you should nest the @tornado.web.asynchronous decorator followed by @tornado.gen.engine.
Documentation about asynchronously calling your own function, without an additional external callback function, is in "Asynchronous and non-Blocking I/O".
You can make your json_fetch like this:
from tornado.concurrent import Future

def json_fetch(self, method, url, body=None, *args, **kwargs):
    http_client = tornado.httpclient.AsyncHTTPClient()
    my_future = Future()
    fetch_future = http_client.fetch(url)
    fetch_future.add_done_callback(
        lambda f: my_future.set_result(f.result()))
    return my_future
Or like this (from A. Jesse Jiryu Davis's answer):
from tornado import gen

@gen.coroutine
def json_fetch(self, method, url, body=None, *args, **kwargs):
    http_client = tornado.httpclient.AsyncHTTPClient()
    headers = tornado.httputil.HTTPHeaders({"content-type": "application/json charset=utf-8"})
    request = tornado.httpclient.HTTPRequest(url, method, headers, body)
    response = yield http_client.fetch(request)
    raise gen.Return(response)
* wrap "post" in "gen.coroutine" and "yield" call of json_fetch.
** "raise gen.Return(response)" for Python2 only, in Python3.3 and later you should write "return response".
Thanks to A. Jesse Jiryu Davis for link "Tornado async request handlers", "Asynchronous and non-Blocking I/O" was found there.
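The `add_done_callback`/`set_result` chaining in the Future-based variant above is the pre-coroutine way to compose futures: an outer future completes with the inner future's result. The same shape can be shown with stdlib futures alone:

```python
import concurrent.futures

def chain(inner_future):
    # Create an outer future that completes with the inner future's result,
    # mirroring the my_future/fetch_future pairing in the Tornado snippet.
    outer = concurrent.futures.Future()
    inner_future.add_done_callback(lambda f: outer.set_result(f.result()))
    return outer

inner = concurrent.futures.Future()
outer = chain(inner)
inner.set_result(42)   # resolving the inner future fires the callback
print(outer.result())  # 42
```

Coroutines (`gen.coroutine`/`yield`, and later `async`/`await`) exist precisely to hide this callback plumbing, which is why the coroutine version of json_fetch reads so much more simply.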