Will gevent speed up pymongo connection - nginx

I am using bottle with pymongo. My server is nginx and uwsgi.
Will gevent make my pymongo calls run asynchronously (by which I mean concurrently, as with multithreading) just by using the code below?
from gevent import monkey; monkey.patch_socket()
My reference:
http://api.mongodb.org/python/current/examples/gevent.html
Update:
I have updated the uwsgi.ini:
[uwsgi]
plugins=python
socket=/tmp/uwsgi.myapp.socket
pythonpath=/var/www/myapp
gevent = 100
Am I doing it correctly?

You have to enable gevent mode in uWSGI too:
http://uwsgi-docs.readthedocs.org/en/latest/Gevent.html
Then use monkey.patch_all() instead of monkey.patch_socket(), as uWSGI is a native gevent application and does not use gevent's monkey-patching features by default.
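For reference, a minimal sketch of what the combined setup could look like, reusing the socket path and greenlet count from the question (the values are placeholders, not a verified config):
[uwsgi]
plugins = python
socket = /tmp/uwsgi.myapp.socket
pythonpath = /var/www/myapp
# run the gevent loop engine with up to 100 greenlets per worker
gevent = 100
and at the very top of the application module, before anything else is imported:
from gevent import monkey; monkey.patch_all()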

Related

Django can only handle ASGI/HTTP connections, not lifespan. in uvicorn

The problem: an unhandled ValueError on a generic ASGI request:
Django can only handle ASGI/HTTP connections, not lifespan.
I'm using:
Django==3.2.6
gunicorn==20.1.0
uvicorn==0.20.0
docker CMD: gunicorn --bind 0.0.0.0:9999 --workers 1 --threads 8 --timeout 0 erp.asgi:application -k uvicorn.workers.UvicornWorker
The server works fine; I need to make sure whether the solution of running uvicorn with --lifespan off has any side-effects.
To silence this warning:
Add a custom worker with lifespan turned off:
from uvicorn.workers import UvicornWorker

class MyUvicornWorker(UvicornWorker):
    CONFIG_KWARGS = {"lifespan": "off"}
Then use the custom worker:
gunicorn --bind 0.0.0.0:8888 --workers 1 --threads 8 --timeout 0 erp.asgi:application -k proj.uvicorn_worker.MyUvicornWorker
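A hedged refinement, not required by the answer above: UvicornWorker defines its own CONFIG_KWARGS defaults, and assigning the attribute replaces them wholesale. If you want to keep those defaults and only disable lifespan, merge instead:
from uvicorn.workers import UvicornWorker

class MyUvicornWorker(UvicornWorker):
    # keep the parent's defaults (loop/http selection) and only turn lifespan off
    CONFIG_KWARGS = {**UvicornWorker.CONFIG_KWARGS, "lifespan": "off"}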
Tested on my Django 3.2.6: turning the lifespan protocol implementation off works.
Until Django 4.2.x, django.core.asgi only handles HTTP:
# FIXME: Allow to override this.
if scope["type"] != "http":
    raise ValueError(
        "Django can only handle ASGI/HTTP connections, not %s." % scope["type"]
    )

Can't access the fastapi page using the public ipv4 address of the deployed aws ec2 instance with the uvicorn service running

I was testing a simple fastapi backend by deploying it on an aws ec2 instance. The service runs fine on the default port 8000 on my local machine. But when I ran the script on the ec2 instance with
uvicorn main:app --reload, it ran just fine with the following output:
INFO: Will watch for changes in these directories: ['file/path']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [4698] using StatReload
INFO: Started server process [4700]
INFO: Waiting for application startup.
INFO: Application startup complete.
Then in the ec2 security group configuration, TCP traffic on port 8000 was allowed, as shown in the image below.
[image: ec2 security group port detail]
Then, to test and access the service, I opened the public ipv4 address with the port, https://ec2-public-ipv4-ip:8000/, in chrome.
But there is no response whatsoever.
The resulting webpage:
[image: result webpage]
The error in the console:
VM697:6747 crbug/1173575, non-JS module files deprecated.
(anonymous) @ VM697:6747
The fastapi main file contains:
from fastapi import FastAPI, Form, Depends
from fastapi.middleware.cors import CORSMiddleware
from fastapi.encoders import jsonable_encoder
import joblib
import numpy as np
import os
from own.preprocess import Preprocess
import sklearn

col_data = joblib.load("col_bool_mod.z")

app = FastAPI()

@app.get("/predict")
async def test():
    return jsonable_encoder(col_data)

@app.post("/predict")
async def provide(data: list):
    print(data)
    output = main(data)
    return output

def predict_main(df):
    num_folds = len(os.listdir("./models/"))
    result_li = []
    for fold in range(num_folds):
        print(f"predicting for fold {fold} / {num_folds}")
        model = joblib.load(f"./models/tabnet/{fold}_tabnet_reg_adam/{fold}_model.z")
        result = model.predict(df)
        print(result)
        result_li.append(result)
    return np.mean(result_li)

def main(data):
    df = Preprocess(data)
    res = predict_main(df)
    print(res)
    return {"value": f"{np.float64(res).item():.3f}" if res >= 0 else f"{np.float64(0).item()}"}
The service runs fine with the same steps for the react js frontend on port 3000, but fastapi on port 8000 is somehow not working.
Thank you for your patience.
I wanted to retrieve the basic api responses from the fastapi-uvicorn server deployed on an aws ec2 instance, but there is no response with port 8000 open and the instance accessed by its public ipv4 address.
One way the problem was worked around is by adding the public ipv4 address followed by port 3000 to the CORS origins. But the issue remains of hiding the get request data in the browser when it is accessed via port 8000.
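One detail worth checking from the startup log above: Uvicorn reports it is listening on http://127.0.0.1:8000, and a loopback-bound server is unreachable from outside the instance no matter what the security group allows. If that is the cause (an inference from the log, not a confirmed diagnosis for this deployment), binding to all interfaces would look like:
uvicorn main:app --host 0.0.0.0 --port 8000
Note also that the browser URL above uses https://, while uvicorn as started here serves plain http.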

Gevent WSGI server doesn't handle requests asynchronously even after monkey patching

My setup is simple. I start my server with python gevent_wsgi_server.py.
When I hit the /block endpoint multiple times, the requests are rendered sequentially in my browser, 5 seconds each.
I do the monkey patching in the server before anything else. I think it has something to do with the time module, which I import inside my app.py. But then again, I wonder why the patching didn't work.
This is my Application
# app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello, World! I am a flask app'

@app.route('/block')
def blocking_view():
    import time; time.sleep(5)
    return 'BLOCKING !'
This is the gevent WSGI server
# gevent_wsgi_server.py
from gevent import monkey; monkey.patch_all()

from gevent.pywsgi import WSGIServer
from app import app

if __name__ == "__main__":
    wsgi_server = WSGIServer(("0.0.0.0", 8000), app)
    wsgi_server.serve_forever()
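A quick way to rule out the browser as the culprit (browsers commonly serialize concurrent requests to the same URL) is to fire the requests from a script instead. A minimal sketch, assuming the server above is running locally; this script is not part of the original post:
# concurrency_check.py
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://127.0.0.1:8000/block"

def fetch(i):
    start = time.time()
    urlopen(URL).read()
    print(f"request {i} finished in {time.time() - start:.1f}s")

start = time.time()
with ThreadPoolExecutor(max_workers=5) as pool:
    pool.map(fetch, range(5))

# roughly 5s total means the requests were served concurrently;
# roughly 25s means they were handled one after another
print(f"total: {time.time() - start:.1f}s")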

Apache airflow celery worker server running in dev mode on production build

I have created a production docker image using the provided breeze command-line tool. However, when I run the airflow worker command, I get the following message on the command line.
Breeze command:
./breeze build-image --production-image --python 3.7 --additional-extras=jdbc --additional-python-deps="pandas pymysql" --additional-runtime-apt-deps="default-jre-headless"
Can anyone help with how to move the worker off the development server?
airflow-worker_1 | Starting flask
airflow-worker_1 | * Serving Flask app "airflow.utils.serve_logs" (lazy loading)
airflow-worker_1 | * Environment: production
airflow-worker_1 | WARNING: This is a development server. Do not use it in a production deployment.
airflow-worker_1 | Use a production WSGI server instead.
airflow-worker_1 | * Debug mode: off
airflow-worker_1 | [2021-02-08 21:57:58,409] {_internal.py:113} INFO - * Running on http://0.0.0.0:8793/ (Press CTRL+C to quit)
here is a discussion by an airflow maintainer on github: https://github.com/apache/airflow/discussions/18519
It's harmless. It's an internal server run by the executor to share logs with the webserver. It has already been corrected in main to use a 'production' setup (though it's not REALLY needed in this case, as the log "traffic" and its characteristics are not production-webserver-like).
The fix will be released in Airflow 2.2 (~ a month from now).

Pytorch model prediction in production with uwsgi

I have a problem deploying a pytorch model in production. For a demonstration, I built a simple model and a flask app. I put everything in a docker container (pytorch+flask+uwsgi) plus another container for nginx. Everything is running well: my app is rendered and I can navigate inside it. However, when I navigate to the URL that launches a prediction from the model, the server hangs and does not seem to compute anything.
The uWSGI is run like this:
/opt/conda/bin/uwsgi --ini /usr/src/web/uwsgi.ini
with uwsgi.ini
[uwsgi]
#application's base folder
chdir = /usr/src/web/
#python module to import
wsgi-file = /usr/src/web/wsgi.py
callable = app
#socket file's location
socket = /usr/src/web/uwsgi.sock
#permissions for the socket file
chmod-socket = 666
# Port to expose
http = :5000
# Cleanup the socket when process stops
vacuum = true
#Log directory
logto = /usr/src/web/app.log
# minimum number of workers to keep at all times
cheaper = 2
processes = 16
As said, the server hangs and I finally get a timeout. What is strange is that when I run the flask application directly (also in the container) with
python /usr/src/web/manage.py runserver --host 0.0.0.0
I get my prediction in no time.
I think this is related to
https://discuss.pytorch.org/t/basic-operations-do-not-work-in-1-1-0-with-uwsgi-flask/50257
Maybe try as mentioned there:
app = flask.Flask(__name__)
segmentator = None

@app.before_first_request
def load_segmentator():
    global segmentator
    segmentator = Segmentator()
where Segmentator is a class wrapping pytorch's nn.Module that loads its weights in __init__.
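For concreteness, a hypothetical sketch of such a wrapper; the network and the commented-out weights path are placeholders, not from the original post:
import torch
import torch.nn as nn

class Segmentator:
    def __init__(self):
        # build the network and load weights once per worker process,
        # after uWSGI has forked, instead of at module import time
        self.model = nn.Sequential(nn.Conv2d(3, 1, kernel_size=1))  # placeholder net
        # self.model.load_state_dict(torch.load("weights.pth", map_location="cpu"))
        self.model.eval()

    @torch.no_grad()
    def predict(self, x):
        return self.model(x)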
FYI this solution worked for me with one app but not the other
