What I want to do
I have an HTTP API service, written in Flask, which serves as a template for building instances of different services. As such, this template needs to be generalizable enough to handle use cases that do and do not include Kafka consumption.
My goal is to have an optional Kafka consumer running in the background of the API template. I want any service that needs it to be able to read data from a Kafka topic asynchronously, while also independently responding to HTTP requests as it usually does. These two processes (Kafka consuming, HTTP request handling) aren't related, except that they'll be happening under the hood of the same service.
What I've written
Here's my setup:
# ./create_app.py
from flask import Flask
from flask_socketio import SocketIO

from .blueprints import special_http_handling_blueprint

socketio = None

def create_app(kafka_consumer_too=False):
    """
    Return a Flask app object, with or without a Kafka-ready SocketIO object as well
    """
    app = Flask('my_service')
    app.register_blueprint(special_http_handling_blueprint)
    if kafka_consumer_too:
        global socketio
        socketio = SocketIO(app=app, message_queue='kafka://localhost:9092', channel='some_topic')
        from .blueprints import kafka_consumption_blueprint
        app.register_blueprint(kafka_consumption_blueprint)
        return app, socketio
    return app
My run.py is:
# ./run.py
from .create_app import create_app

app, socketio = create_app(kafka_consumer_too=True)

if __name__ == "__main__":
    socketio.run(app, debug=True)
And here's the Kafka consumption blueprint I've written, which is where I think it should be handling the stream events:
# ./blueprints/kafka_consumption_blueprint.py
from flask import Blueprint

from ..create_app import socketio

kafka_consumption_blueprint = Blueprint('kafka_consumption', __name__)

@socketio.on('message')
def handle_message(message):
    print('received message: ' + message)
What it currently does
With the above, my HTTP requests are being handled fine when I curl localhost:5000. The problem is that, when I write to the some_topic Kafka topic (on port 9092), nothing is showing up. I have a CLI Kafka consumer running in another shell, and I can see that the messages I'm sending on that topic are showing up. So it's the Flask app that's not reacting: no messages are being consumed by handle_message().
What am I missing here? Thanks in advance.
I think you are misinterpreting the meaning of the message_queue argument.
This argument is used when you have multiple server instances. These instances communicate with each other through the configured message queue. This queue is 100% internal; there is nothing that you, as a user of the library, can do with it.
If you want to build some sort of pub/sub mechanism, then you have to implement the listener for it in your application yourself.
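A minimal sketch of such a listener, assuming the kafka-python package and the topic/broker settings from the question (the helper names here are hypothetical):
# Hedged sketch: consume 'some_topic' in a plain background thread while
# Flask handles HTTP requests independently. kafka-python is an assumed choice.
import threading

from kafka import KafkaConsumer

def consume_messages():
    # Blocks in its own thread, iterating over records as they arrive.
    consumer = KafkaConsumer('some_topic', bootstrap_servers='localhost:9092')
    for record in consumer:
        print('received message: ' + record.value.decode('utf-8'))

def start_kafka_listener():
    # Daemon thread, so it never blocks process shutdown.
    threading.Thread(target=consume_messages, daemon=True).start()
You would call start_kafka_listener() from create_app() when kafka_consumer_too is True, instead of relying on SocketIO's message_queue.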
Related
I am running a development server locally
python manage.py runserver 8000
Then I run a script that connects to the consumer below
from channels.generic.websocket import AsyncJsonWebsocketConsumer

class MyConsumer(AsyncJsonWebsocketConsumer):
    async def connect(self):
        import time
        time.sleep(99999999)
        await self.accept()
Everything runs fine and the consumer sleeps for a long time as expected. However I am not able to access http://127.0.0.1:8000/ from the browser.
The problem is bigger in real life, since the consumer needs to make an HTTP request to the same server and essentially ends up in a deadlock.
Is this the expected behaviour? How do I allow calls to my server while a slow consumer is running?
Since this is an async function, you should be using asyncio's sleep.
import asyncio

from channels.generic.websocket import AsyncJsonWebsocketConsumer

class MyConsumer(AsyncJsonWebsocketConsumer):
    async def connect(self):
        await asyncio.sleep(99999999)
        await self.accept()
If you use time.sleep, you will block the entire Python thread.
This also applies to your upstream HTTP request: you need to use an asyncio HTTP library, not a synchronous one. (Basically, you should be awaiting anything that is expected to take any time.)
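A minimal sketch of that upstream call, assuming aiohttp as the asyncio HTTP client (the library choice and the URL are illustrative, not from the question):
import aiohttp

from channels.generic.websocket import AsyncJsonWebsocketConsumer

class MyConsumer(AsyncJsonWebsocketConsumer):
    async def connect(self):
        await self.accept()
        # Awaiting the request yields control back to the event loop,
        # so the dev server keeps answering other requests meanwhile.
        async with aiohttp.ClientSession() as session:
            async with session.get('http://127.0.0.1:8000/') as resp:
                body = await resp.text()
        await self.send_json({'upstream_body': body})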
I'm trying to learn gRPC and implemented the same code as in the tutorial. I'm wondering how to add a gRPC health check to it.
I stumbled upon this, but I'm clueless on how to write a gRPC health check.
I found this after many hours of searching. To health check a gRPC server, you have to add the health check service to your existing server, so the existing server will have multiple services running on it.
An example of how to add multiple services to the same server is explained here.
Sample server:
# pip install grpcio-health-checking
from concurrent import futures

import grpc
import helloworld_pb2_grpc  # generated from the tutorial's helloworld.proto

from grpc_health.v1 import health
from grpc_health.v1 import health_pb2
from grpc_health.v1 import health_pb2_grpc

server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))

# your normal service, that the server is supposed to run
helloworld_pb2_grpc.add_GreeterServicer_to_server(_GreeterServicer(), server)

# health check service - add this service to the same server
health_pb2_grpc.add_HealthServicer_to_server(health.HealthServicer(), server)

server.add_insecure_port('[::]:50051')
server.start()
server.wait_for_termination()
The accompanying unit tests serve as a pretty good reference.
def start_server(self, non_blocking=False, thread_pool=None):
    self._thread_pool = thread_pool
    self._servicer = health.HealthServicer(
        experimental_non_blocking=non_blocking,
        experimental_thread_pool=thread_pool)
    self._servicer.set('', health_pb2.HealthCheckResponse.SERVING)
    self._servicer.set(_SERVING_SERVICE,
                       health_pb2.HealthCheckResponse.SERVING)
    self._servicer.set(_UNKNOWN_SERVICE,
                       health_pb2.HealthCheckResponse.UNKNOWN)
    self._servicer.set(_NOT_SERVING_SERVICE,
                       health_pb2.HealthCheckResponse.NOT_SERVING)
    self._server = test_common.test_server()
    port = self._server.add_insecure_port('[::]:0')
    health_pb2_grpc.add_HealthServicer_to_server(self._servicer, self._server)
    self._server.start()
Keep a reference to the health servicer in the main servicer class for your application server. Then call its set() method at the appropriate times, e.g. when startup is finished or when the server goes into a state in which it is unable to serve requests. Do note, however, that this makes use of some experimental features in order to ensure that the attached health servicer does not cause application-level requests to be starved.
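A minimal sketch of that pattern (the servicer and service names here are illustrative, not taken from the tests):
# Hypothetical wiring: the application servicer keeps the health servicer
# and flips its status as the service becomes (un)available.
class GreeterServicer(helloworld_pb2_grpc.GreeterServicer):
    def __init__(self, health_servicer):
        self._health_servicer = health_servicer

    def mark_serving(self):
        # e.g. call this once startup has finished
        self._health_servicer.set(
            'helloworld.Greeter', health_pb2.HealthCheckResponse.SERVING)

    def mark_not_serving(self):
        # e.g. call this when a dependency goes away
        self._health_servicer.set(
            'helloworld.Greeter', health_pb2.HealthCheckResponse.NOT_SERVING)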
I found this example really helpful:
https://github.com/grpc/grpc/tree/master/examples/python/xds
Specifically, server.py provides an implementation of health checking as well as reflection.
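For reference, wiring up reflection next to the health service looks roughly like this, assuming the grpcio-reflection package and the Greeter service from the tutorial:
# pip install grpcio-reflection  (assumed dependency)
import helloworld_pb2
from grpc_reflection.v1alpha import reflection

SERVICE_NAMES = (
    helloworld_pb2.DESCRIPTOR.services_by_name['Greeter'].full_name,
    reflection.SERVICE_NAME,
)
# Register before server.start(); lets tools like grpcurl list the services.
reflection.enable_server_reflection(SERVICE_NAMES, server)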
I was testing a gRPC bidirectional streaming application, which is at the link below, and it is working fine.
https://github.com/melledijkstra/python-grpc-chat
Do we have any tool to trigger bidirectional streaming at, say, 100 requests per second, the way we use wrk/jmeter for REST/HTTP APIs?
I tried exposing the API (run) as REST and triggering 100 req/sec using the wrk tool. That does not seem to be a proper approach:
@app.route('/', methods=['GET'])
def send_message(self, event):
    """
    This method is called when the user enters something into the textbox
    """
    message = self.entry_message.get()
    if message != '':
        n = chat.Note()
        n.name = self.username
        n.message = message
        print("S[{}] {}".format(n.name, n.message))
        self.conn.SendNote(n)
The complete code is the actual gRPC chat application: https://github.com/melledijkstra/python-grpc-chat
I wanted to do load testing of a gRPC bidirectional streaming application, sending 100 requests per second to a server. Is there a possible approach to this?
This is to test that my server can handle enough load with this chat functionality.
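One possible homemade approach, as a hedged sketch only: drive the unary SendNote call at a fixed rate from a plain gRPC client. The module and stub names (chat_pb2, chat_pb2_grpc, ChatServerStub) are assumptions based on the linked repo, not verified:
# Hedged sketch of a minimal load driver at ~100 req/sec.
import time

import grpc
import chat_pb2 as chat        # assumed generated module names
import chat_pb2_grpc as rpc

channel = grpc.insecure_channel('localhost:50051')
stub = rpc.ChatServerStub(channel)

for i in range(1000):
    n = chat.Note()
    n.name = 'load-test'
    n.message = 'message %d' % i
    stub.SendNote(n)     # feeds the server's broadcast stream
    time.sleep(0.01)     # ~100 requests per second, ignoring call latency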
I have the following code in the index.js file, located in the functions folder of my Google Firebase project:
net = require('express')()
net.get('/', function(req, res) { res.sendFile(__dirname + '/slave.htm') })
exports.run = require('firebase-functions').https.onRequest(net)
require('socket.io').listen(net).on("connection", function(socket) {})
But when I execute \gfp1>firebase deploy in the command prompt, it gives me this error:
You are trying to attach socket.io to an express request handler function. Please, pass a http.Server instance.
Yes, and I pass an http.Server instance in the following code:
net = require('firebase-functions').https.onRequest((req, res) => { res.send("socket.io working!") })
exports.run = net
require('socket.io').listen(net).on("connection", function(socket) {})
It gives me the same error again:
You are trying to attach socket.io to an express request handler function. Please, pass a http.Server instance.
Then I tried attaching socket.io to firebase-functions with this code:
net = https.onRequest((req, res) => { res.send("socket.io working!") })
exports.run = require('firebase-functions').net
require('socket.io').listen(require('firebase-functions').net).on("connection", function(socket) {})
And that gives this error:
https is not defined
When I run this code in localhost:
app = require('express')()
app.get('/', function(req, res) { res.sendFile(__dirname + '/slave.htm') })
net = require('http').createServer(app)
net.listen(8888, function() { console.log("Server listening.") })
require('socket.io').listen(net).on("connection", function(socket) {})
The console prints \gfp>Server listening., and when I go to the URL http://127.0.0.1:8888 it works, sending an HTML file to the browser, as I expected:
<script>
document.write("File system working!")
document.body.style.backgroundColor="black"
document.body.style.color="white"
</script>
But the problem happens when I try to convert net=require('http').createServer(app); net.listen(8888, function(){console.log("Server listening.")}) to net=exports.run=require('firebase-functions').https.onRequest((req,res)=>{res.send("Firebase working!")}); it seems to be impossible.
You can't run code to listen on some port with Cloud Functions. This is because you aren't guaranteed to have a single machine or instance running your code. It could be distributed among many instances all running concurrently. You shouldn't know or care if that happens - Cloud Functions will just scale to meet the needs placed on your functions.
If you deploy an HTTP type function, it will automatically listen on the https port for the dedicated host for your project, and you can send web requests to that.
If you want to perform transactions over a persistently held socket, use the Realtime Database instead: have the client write values into the database, then respond to those writes with a database trigger function that you write. That function can send data back to the client by writing something into a database location that the client is listening to.
Background:
We have a Python web application which uses SQLAlchemy as its ORM. We currently run this application with Gunicorn (sync worker). This application is only used to respond to LONG RUNNING REQUESTS (i.e. serving big files; please don't advise using X-Sendfile/X-Accel-Redirect, because the response is generated dynamically from the Python app).
With Gunicorn sync workers, when we run 8 workers, only 8 requests are served simultaneously. Since all of these responses are IO bound, we want to switch to an asynchronous worker type to get better throughput.
We have switched the worker type from sync to eventlet in the Gunicorn configuration file. Now we can respond to all of the requests simultaneously, but another mysterious (mysterious to me) problem has occurred.
In the application we have a scoped session object at module level. The following code is from our orm.py file:
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

uri = 'mysql://%s:%s@%s/%s?charset=utf8&use_unicode=1' % (
    config.MYSQL_USER,
    config.MYSQL_PASSWD,
    config.MYSQL_HOST,
    config.MYSQL_DB,
)

engine = create_engine(uri, echo=False)

session = scoped_session(sessionmaker(
    autocommit=False,
    autoflush=False,
    bind=engine,
    query_cls=CustomQuery,
    expire_on_commit=False,
))
Our application uses the session like this:
from putio.models import session
f = session.query(File).first()
f.name = 'asdf'
session.add(f)
session.commit()
While we were using the sync worker, the session was used by one request at a time. After we switched to the async eventlet worker, all requests in the same worker share the same session, which is not desired. When the session is committed in one request, or an exception happens, all other requests fail because the session is shared.
The SQLAlchemy documentation says that scoped_session is used to get separate sessions in threaded environments. AFAIK, requests in async workers all run in the same thread.
Question:
We want separate sessions for each request in the async worker. What is the correct way of using the session with async workers in SQLAlchemy?
Use scoped_session's scopefunc argument.
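A minimal sketch of what that looks like for the eventlet worker, scoping sessions to the current green thread (the sessionmaker arguments are carried over from the question):
# Scope sessions to the current greenlet instead of the current thread,
# so each request handled by the eventlet worker gets its own session.
from eventlet.corolocal import get_ident
from sqlalchemy.orm import scoped_session, sessionmaker

session = scoped_session(
    sessionmaker(
        autocommit=False,
        autoflush=False,
        bind=engine,
        query_cls=CustomQuery,
        expire_on_commit=False,
    ),
    scopefunc=get_ident,
)
You still need to call session.remove() at the end of each request so that the per-greenlet session is cleaned up.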