Julia WebSocket slow to read/write - http

I was testing Julia WebSockets and found that they are much slower than the Python/Node equivalents. As an example, I wrote a test that sends a ping and measures the time taken to get a pong response from some crypto exchanges. Julia consistently takes 50 ms for the test below, whereas Python takes 2-4 ms. Is this expected?
Julia:
using WebSockets

function main()
    WebSockets.open("wss://wsaws.okex.com:8443/ws/v5/public") do ws
        for i = 1:100
            a = time_ns()
            write(ws, "ping")
            data, success = readguarded(ws)
            b = time_ns()
            println(String(data), " ", (b - a) / 1000000)
            sleep(0.1)
        end
    end
end

main()
Equivalent Python:
import websockets
import asyncio
import time

async def main():
    uri = "wss://wsaws.okex.com:8443/ws/v5/public"
    async with websockets.connect(uri) as websocket:
        msg = "hello"
        for i in range(101):
            a = time.time()
            await websocket.send(msg)
            x = await websocket.recv()
            b = time.time()
            print(b - a)
            time.sleep(0.1)

asyncio.get_event_loop().run_until_complete(main())
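For reference, here is a variant of the Python timing loop that uses time.perf_counter() (a monotonic, high-resolution clock) and asyncio.sleep() so the event loop is not blocked between iterations. This is only a measurement sketch against the same endpoint, not an explanation of the Julia gap:

import asyncio
import time
import websockets

async def measure():
    uri = "wss://wsaws.okex.com:8443/ws/v5/public"
    async with websockets.connect(uri) as websocket:
        for _ in range(100):
            start = time.perf_counter()       # monotonic, high-resolution timer
            await websocket.send("ping")
            await websocket.recv()            # wait for the server's pong reply
            elapsed = time.perf_counter() - start
            print(f"{elapsed * 1000:.2f} ms")
            await asyncio.sleep(0.1)          # yield to the event loop instead of blocking it

asyncio.run(measure())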

Related

When I upload a sound file and convert the sampling rate with librosa, memory usage is too big

I am developing a web app using Flask + nginx + gunicorn that converts the sampling rate of uploaded files.
When I upload and convert an audio file (mp3 or wav) of about 40 MB, I see a sharp increase in RAM usage.
Although I use AWS Lightsail (2 GB of memory), instantaneous memory usage goes over that.
The logs show:
In nginx error.log: upstream prematurely closed connection while reading response header from upstream
In syslog: OOM, and the process is killed by the OOM killer
I'm relying on virtual memory to make a single file work somehow, but I want several people to be able to use it at the same time, so I'm trying to solve the memory problem.
Question
Why does the upload consume much more memory than the size of the file?
What is the solution?
The code uses librosa.
The memory_usage function was added to record the memory usage history.
[#1] memory usage: 153.62500 MB
[#2] memory usage: 488.25391 MB
[#3] memory usage: 219.86719 MB
[#4] memory usage: 220.30078 MB
import psutil
from flask import Flask, render_template, request, send_file
import librosa, soundfile
import librosa.display
from werkzeug.utils import secure_filename
from datetime import datetime
import os
import io
from pydub import AudioSegment

# memory
def memory_usage(message: str = 'debug'):
    # current process RAM usage
    p = psutil.Process()
    rss = p.memory_info().rss / 2 ** 20  # Bytes to MB
    print(f"[{message}] memory usage: {rss: 10.5f} MB")
    file = open(f"C:\\Users\\kimkunyu\\Desktop\\mmmmm.txt", "a")
    file.write(f"[{message}] memory usage: {rss: 10.5f} MB")
    file.close()

def create_app():
    app = Flask(__name__)

    @app.route('/')
    def hello_pybo():
        return render_template('main.html')

    @app.route('/upload', methods=['POST', 'GET'])
    def upload():
        if request.method == 'POST':
            audio_data = request.files["lc"]
            memory_usage('#1')
            if audio_data:
                filename = secure_filename(audio_data.filename)
                src = f"pybo/sound/{filename}"
                audio_data.save(src)
                stem, fileExtension = os.path.splitext(filename)
                if fileExtension == '.mp3':
                    audSeg = AudioSegment.from_mp3(src)
                    ##
                    os.remove(src)
                    src = f"pybo/sound/{stem}" + '.wav'
                    audSeg.export(src, format="wav")
                sr = librosa.get_samplerate(src)
                memory_usage('#2')
                audio_data, sr = librosa.load(src, sr=sr // 6)
                # print(sr)
                # print(audio_data.shape)
                # print(audio_data.dtype)
                # print(len(audio_data))
                memory_usage('#3')
                new_filename = f'{filename.split(".")[0]}_{str(datetime.now())}.wav'
                new_filename = new_filename.replace(':', "_")
                os.remove(src)
                save_location = f'pybo/output/{new_filename}'
                soundfile.write(save_location, audio_data, sr, format='WAV')
                memory_usage('#4')
                return render_template("download.html", save_location=save_location)

    @app.route('/download', methods=['POST', 'GET'])
    def download():
        save_location = request.form["save_location"]
        with open(save_location, 'rb') as fo:
            return_data = io.BytesIO()
            memory_usage('#5')
            return_data.write(fo.read())
            memory_usage('#6')
            # (after writing, cursor will be at last byte, so move it to start)
            return_data.seek(0)
        os.remove(save_location)
        return send_file(return_data, as_attachment=True, download_name='converted_file.wav')

    return app

Active BLE Scanning (BlueZ) - Issue with DBus

I've started a project where I need to actively (all the time) scan for BLE devices. I'm on Linux, using BlueZ 5.49, and I use Python to communicate with D-Bus (dbus 1.10.20).
I'm able to start and stop scanning with bluetoothctl and get the BLE advertisement data through D-Bus (GetManagedObjects() of the BlueZ interface). The problem is that when I let the scan run for many hours, dbus-daemon starts to take more and more RAM, and I can't find a way to "flush" what D-Bus has gathered from BlueZ. Eventually the RAM fills up and Linux isn't happy.
So I've tried not scanning the whole time, hoping that would let the garbage collector do its cleanup. It didn't work.
I've edited /etc/dbus-1/system.d/bluetooth.conf to remove any interfaces I didn't need:
<policy user="root">
<allow own="org.bluez"/>
<allow send_destination="org.bluez"/>
</policy>
That slowed down the RAM build-up but didn't solve the issue.
I've found a way to inspect which connection has bytes waiting and confirmed that they come from BlueZ:
Connection :1.74 with pid 3622 '/usr/libexec/bluetooth/bluetoothd --experimental ' (org.bluez):
IncomingBytes=1253544
PeakIncomingBytes=1313072
OutgoingBytes=0
PeakOutgoingBytes=210
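For reference, here is a minimal sketch of how per-connection stats like the ones above can be read from Python. This assumes the system dbus-daemon was built with stats support and exposes the optional org.freedesktop.DBus.Debug.Stats interface (the exact method the question used is not shown, and policy usually restricts this call to root):

import dbus

# Sketch only: query per-connection stats from the bus daemon itself.
bus = dbus.SystemBus()
dbus_obj = bus.get_object("org.freedesktop.DBus", "/org/freedesktop/DBus")
stats = dbus.Interface(dbus_obj, "org.freedesktop.DBus.Debug.Stats")
print(stats.GetConnectionStats(":1.74"))  # connection name taken from the output above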
Lastly, I've found that someone needs to read what is waiting in D-Bus in order to free the memory, so I found this: https://stackoverflow.com/a/60665430/15325057
I do receive the data that BlueZ is sending, but the memory still builds up.
The only way I know to free up D-Bus is to reboot Linux, which is not ideal.
I'm reaching the end of what I understand about D-Bus, which is why I'm here today.
If you have any insight that could help me free D-Bus from the BlueZ messages, it would be highly appreciated.
Thanks in advance
EDIT: Adding the D-Bus code I use to read the discovered devices:
#!/usr/bin/python3
import dbus

BLUEZ_SERVICE_NAME = "org.bluez"
DBUS_OM_IFACE = "org.freedesktop.DBus.ObjectManager"
DEVICES_IFACE = "org.bluez.Device1"

def main_loop(subproc):
    devinfo = None
    objects = None
    dbussys = dbus.SystemBus()
    dbusconnection = dbussys.get_object(BLUEZ_SERVICE_NAME, "/")
    bluezInterface = dbus.Interface(dbusconnection, DBUS_OM_IFACE)
    while True:
        try:
            objects = bluezInterface.GetManagedObjects()
        except dbus.DBusException as err:
            print("dbus Error : " + str(err))
            pass
        all_devices = (str(path) for path, interfaces in objects.items() if DEVICES_IFACE in interfaces.keys())
        for path, interfaces in objects.items():
            if "org.bluez.Adapter1" not in interfaces.keys():
                continue
            device_list = [d for d in all_devices if d.startswith(path + "/")]
            for dev_path in device_list:
                properties = objects[dev_path][DEVICES_IFACE]
                if "ServiceData" in properties.keys() and "Name" in properties.keys() and "RSSI" in properties.keys():
                    #[... Do someting...]
Indeed, BlueZ flushes memory when you stop discovering. So in order to scan continuously you need to start and stop discovery repeatedly. I discover for 6 seconds, wait 1 second, then start discovering for 6 seconds again, and so on. If you check the logs you will see it deletes a lot of stuff when stopping discovery.
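For illustration, here is a minimal sketch of that 6-seconds-on / 1-second-off cycle, using the same pydbus/GLib calls as the beacon example further down (the adapter path /org/bluez/hci0 is an assumption):

from gi.repository import GLib
from pydbus import SystemBus

bus = SystemBus()
adapter = bus.get('org.bluez', '/org/bluez/hci0')  # adapter path assumed

def pause_discovery():
    """Stop discovery (letting BlueZ flush cached devices), wait 1 s, then restart."""
    adapter.StopDiscovery()
    GLib.timeout_add_seconds(1, resume_discovery)
    return False  # one-shot timeout: do not repeat

def resume_discovery():
    adapter.StartDiscovery()
    GLib.timeout_add_seconds(6, pause_discovery)
    return False

adapter.StartDiscovery()
GLib.timeout_add_seconds(6, pause_discovery)
GLib.MainLoop().run()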
I can't really reproduce your error exactly, but my system is not happy running that fast while loop that repeatedly gets the data from GetManagedObjects.
Below is the code I ran based on your code with a little bit of refactoring...
import dbus

BLUEZ_SERVICE_NAME = "org.bluez"
DBUS_OM_IFACE = "org.freedesktop.DBus.ObjectManager"
ADAPTER_IFACE = "org.bluez.Adapter1"
DEVICES_IFACE = "org.bluez.Device1"

def main_loop():
    devinfo = None
    objects = None
    dbussys = dbus.SystemBus()
    dbusconnection = dbussys.get_object(BLUEZ_SERVICE_NAME, "/")
    bluezInterface = dbus.Interface(dbusconnection, DBUS_OM_IFACE)
    while True:
        objects = bluezInterface.GetManagedObjects()
        for path in objects:
            name = objects[path].get(DEVICES_IFACE, {}).get('Name')
            rssi = objects[path].get(DEVICES_IFACE, {}).get('RSSI')
            service_data = objects[path].get(DEVICES_IFACE, {}).get('ServiceData')
            if all((name, rssi, service_data)):
                print(f'{name} # {rssi} = {service_data}')
                #[... Do someting...]

if __name__ == '__main__':
    main_loop()
I'm not sure what you are trying to do in the broader project but if I can make some recommendations...
A more typical way of scanning for service/manufacturer data is to subscribe to signals in D-Bus that trigger callbacks when something of interest happens.
Below is some code I use to look for iBeacons and Eddystone beacons. This runs in the GLib event loop, which is maybe something you have ruled out, but it is more efficient on resources.
It uses different Python D-Bus bindings, as I find pydbus more "Pythonic".
I have left in the code that processes the beacons, as it might be a useful reference.
import argparse
from gi.repository import GLib
from pydbus import SystemBus
import uuid

DEVICE_INTERFACE = 'org.bluez.Device1'
remove_list = set()

def stop_scan():
    """Stop device discovery and quit event loop"""
    adapter.StopDiscovery()
    mainloop.quit()

def clean_beacons():
    """
    BlueZ D-Bus API does not show duplicates. This is a
    workaround that removes devices that have been found
    during discovery
    """
    not_found = set()
    for rm_dev in remove_list:
        try:
            adapter.RemoveDevice(rm_dev)
        except GLib.Error as err:
            not_found.add(rm_dev)
    for lost in not_found:
        remove_list.remove(lost)

def process_eddystone(data):
    """Print Eddystone data in human readable format"""
    _url_prefix_scheme = ['http://www.', 'https://www.',
                          'http://', 'https://', ]
    _url_encoding = ['.com/', '.org/', '.edu/', '.net/', '.info/',
                     '.biz/', '.gov/', '.com', '.org', '.edu',
                     '.net', '.info', '.biz', '.gov']
    tx_pwr = int.from_bytes([data[1]], 'big', signed=True)
    # Eddystone UID Beacon format
    if data[0] == 0x00:
        namespace_id = int.from_bytes(data[2:12], 'big')
        instance_id = int.from_bytes(data[12:18], 'big')
        print(f'\t\tEddystone UID: {namespace_id} - {instance_id} \u2197 {tx_pwr}')
    # Eddystone URL beacon format
    elif data[0] == 0x10:
        prefix = data[2]
        encoded_url = data[3:]
        full_url = _url_prefix_scheme[prefix]
        for letter in encoded_url:
            if letter < len(_url_encoding):
                full_url += _url_encoding[letter]
            else:
                full_url += chr(letter)
        print(f'\t\tEddystone URL: {full_url} \u2197 {tx_pwr}')

def process_ibeacon(data, beacon_type='iBeacon'):
    """Print iBeacon data in human readable format"""
    print('DATA:', data)
    beacon_uuid = uuid.UUID(bytes=bytes(data[2:18]))
    major = int.from_bytes(bytearray(data[18:20]), 'big', signed=False)
    minor = int.from_bytes(bytearray(data[20:22]), 'big', signed=False)
    tx_pwr = int.from_bytes([data[22]], 'big', signed=True)
    print(f'\t\t{beacon_type}: {beacon_uuid} - {major} - {minor} \u2197 {tx_pwr}')

def ble_16bit_match(uuid_16, srv_data):
    """Expand 16 bit UUID to full 128 bit UUID"""
    uuid_128 = f'0000{uuid_16}-0000-1000-8000-00805f9b34fb'
    return uuid_128 == list(srv_data.keys())[0]

def on_iface_added(owner, path, iface, signal, interfaces_and_properties):
    """
    Event handler for D-Bus interface added.
    Test to see if it is a new Bluetooth device
    """
    iface_path, iface_props = interfaces_and_properties
    if DEVICE_INTERFACE in iface_props:
        on_device_found(iface_path, iface_props[DEVICE_INTERFACE])

def on_device_found(device_path, device_props):
    """
    Handle new Bluetooth device being discover.
    If it is a beacon of type iBeacon, Eddystone, AltBeacon
    then process it
    """
    address = device_props.get('Address')
    address_type = device_props.get('AddressType')
    name = device_props.get('Name')
    alias = device_props.get('Alias')
    paired = device_props.get('Paired')
    trusted = device_props.get('Trusted')
    rssi = device_props.get('RSSI')
    service_data = device_props.get('ServiceData')
    manufacturer_data = device_props.get('ManufacturerData')
    if address.casefold() == '00:c3:f4:f1:58:69':
        print('Found mac address of interest')
    if service_data and ble_16bit_match('feaa', service_data):
        process_eddystone(service_data['0000feaa-0000-1000-8000-00805f9b34fb'])
        remove_list.add(device_path)
    elif manufacturer_data:
        for mfg_id in manufacturer_data:
            # iBeacon 0x004c
            if mfg_id == 0x004c and manufacturer_data[mfg_id][0] == 0x02:
                process_ibeacon(manufacturer_data[mfg_id])
                remove_list.add(device_path)
            # AltBeacon 0xacbe
            elif mfg_id == 0xffff and manufacturer_data[mfg_id][0:2] == [0xbe, 0xac]:
                process_ibeacon(manufacturer_data[mfg_id], beacon_type='AltBeacon')
                remove_list.add(device_path)
    clean_beacons()

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-d', '--duration', type=int, default=0,
                        help='Duration of scan [0 for continuous]')
    args = parser.parse_args()
    bus = SystemBus()
    adapter = bus.get('org.bluez', '/org/bluez/hci0')
    bus.subscribe(iface='org.freedesktop.DBus.ObjectManager',
                  signal='InterfacesAdded',
                  signal_fired=on_iface_added)
    mainloop = GLib.MainLoop()
    if args.duration > 0:
        GLib.timeout_add_seconds(args.duration, stop_scan)
    adapter.SetDiscoveryFilter({'DuplicateData': GLib.Variant.new_boolean(False)})
    adapter.StartDiscovery()
    try:
        print('\n\tUse CTRL-C to stop discovery\n')
        mainloop.run()
    except KeyboardInterrupt:
        stop_scan()

Python: Run only one function async

I have a large legacy application that has one function which is a prime candidate to be executed asynchronously. It's I/O-bound (network and disk) and doesn't return anything.
Here is a very simple, similar implementation:
import random
import time
import requests

def fetch_urls(site):
    wait = random.randint(0, 5)
    filename = site.split("/")[2].replace(".", "_")
    print(f"Will fetch {site} in {wait} seconds")
    time.sleep(wait)
    r = requests.get(site)
    with open(filename, "w") as fd:
        fd.write(r.text)

def something(sites):
    for site in sites:
        fetch_urls(site)
    return True

def main():
    sites = ["https://www.google.com", "https://www.reddit.com", "https://www.msn.com"]
    start = time.perf_counter()
    something(sites)
    total_time = time.perf_counter() - start
    print(f"Finished in {total_time}")

if __name__ == "__main__":
    main()
My end goal is to update the something function so that it runs fetch_urls asynchronously.
I cannot change fetch_urls.
All the documentation and tutorials I can find assume my entire application is async (starting from async def main()), but this is not the case.
It's a huge application spanning multiple modules, and refactoring everything for a single function doesn't seem right.
From what I understand I will need to create a loop, add tasks to it, and dispatch it somehow, but I have tried many different things and everything still runs one after another, as opposed to concurrently.
I would appreciate any assistance. Thanks!
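For what it's worth, one way to keep fetch_urls and the rest of the synchronous application untouched is to spin up an event loop just for this call and push the blocking work onto executor threads. A minimal sketch (the runner coroutine is illustrative, and it assumes fetch_urls is safe to run in threads):

import asyncio

def something(sites):
    async def runner():
        loop = asyncio.get_running_loop()
        # Run each blocking fetch_urls call in the default thread pool, concurrently.
        await asyncio.gather(*(loop.run_in_executor(None, fetch_urls, site) for site in sites))

    asyncio.run(runner())  # the synchronous caller blocks here until all fetches finish
    return True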
Replying to myself: it seems there is no easy way to do this with asyncio alone. I ended up using concurrent.futures:
import time
import requests
import concurrent.futures

def fetch_urls(url, name):
    wait = 5
    filename = url.split("/")[2].replace(".", "_")
    print(f"Will fetch {name} in {wait} seconds")
    time.sleep(wait)
    r = requests.get(url)
    with open(filename, "w") as fd:
        fd.write(r.text)

def something(sites):
    with concurrent.futures.ProcessPoolExecutor(max_workers=5) as executor:
        future_to_url = {
            executor.submit(fetch_urls, url["url"], url["name"]): (url)
            for url in sites["children"]
        }
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
            except Exception as exc:
                print("%r generated an exception: %s" % (url, exc))
    return True

def main():
    sites = {
        "parent": "https://stackoverflow.com",
        "children": [
            {"name": "google", "url": "https://google.com"},
            {"name": "reddit", "url": "https://reddit.com"},
        ],
    }
    start = time.perf_counter()
    something(sites)
    total_time = time.perf_counter() - start
    print(f"Finished in {total_time}")

if __name__ == "__main__":
    main()

Writing files asynchronously

I've been trying to create a server process that receives an input file path and an output path from client processes asynchronously. The server does some database-reliant transformations, but for the sake of simplicity let's assume it merely converts everything to upper case. Here is a toy example of the server:
import asyncio
import aiofiles as aiof
import logging
import sys

ADDRESS = ("localhost", 10000)

logging.basicConfig(level=logging.DEBUG,
                    format="%(name)s: %(message)s",
                    stream=sys.stderr)
log = logging.getLogger("main")
loop = asyncio.get_event_loop()

async def server(reader, writer):
    log = logging.getLogger("process at {}:{}".format(*ADDRESS))
    paths = await reader.read()
    in_fp, out_fp = paths.splitlines()
    log.debug("connection accepted")
    log.debug("processing file {!r}, writing output to {!r}".format(in_fp, out_fp))
    async with aiof.open(in_fp, loop=loop) as inp, aiof.open(out_fp, "w", loop=loop) as out:
        async for line in inp:
            out.write(line.upper())
        out.flush()
    writer.write(b"done")
    await writer.drain()
    log.debug("closing")
    writer.close()
    return

factory = asyncio.start_server(server, *ADDRESS)
server = loop.run_until_complete(factory)
log.debug("starting up on {} port {}".format(*ADDRESS))
try:
    loop.run_forever()
except KeyboardInterrupt:
    pass
finally:
    log.debug("closing server")
    server.close()
    loop.run_until_complete(server.wait_closed())
    log.debug("closing event loop")
    loop.close()
The client:
import asyncio
import logging
import sys
import random

ADDRESS = ("localhost", 10000)
MESSAGES = ["/path/to/a/big/file.txt\n",
            "/output/file_{}.txt\n".format(random.randint(0, 99999))]

logging.basicConfig(level=logging.DEBUG,
                    format="%(name)s: %(message)s",
                    stream=sys.stderr)
log = logging.getLogger("main")
loop = asyncio.get_event_loop()

async def client(address, messages):
    log = logging.getLogger("client")
    log.debug("connecting to {} port {}".format(*address))
    reader, writer = await asyncio.open_connection(*address)
    writer.writelines([bytes(line, "utf8") for line in messages])
    if writer.can_write_eof():
        writer.write_eof()
    await writer.drain()
    log.debug("waiting for response")
    response = await reader.read()
    log.debug("received {!r}".format(response))
    writer.close()
    return

try:
    loop.run_until_complete(client(ADDRESS, MESSAGES))
finally:
    log.debug("closing event loop")
    loop.close()
I activated the server and several clients at once. The server's logs:
asyncio: Using selector: KqueueSelector
main: starting up on localhost port 10000
process at localhost:10000: connection accepted
process at localhost:10000: processing file b'/path/to/a/big/file.txt', writing output to b'/output/file_79609.txt'
process at localhost:10000: connection accepted
process at localhost:10000: processing file b'/path/to/a/big/file.txt', writing output to b'/output/file_68917.txt'
process at localhost:10000: connection accepted
process at localhost:10000: processing file b'/path/to/a/big/file.txt', writing output to b'/output/file_2439.txt'
process at localhost:10000: closing
process at localhost:10000: closing
process at localhost:10000: closing
All clients print this:
asyncio: Using selector: KqueueSelector
client: connecting to localhost port 10000
client: waiting for response
client: received b'done'
main: closing event loop
The output files are created, but they remain empty. I believe they are not being flushed. Is there any way I can fix this?
You are missing an await before out.write() and out.flush():
import asyncio
from pathlib import Path
import aiofiles as aiof

FILENAME = "foo.txt"

async def bad():
    async with aiof.open(FILENAME, "w") as out:
        out.write("hello world")
        out.flush()
    print("done")

async def good():
    async with aiof.open(FILENAME, "w") as out:
        await out.write("hello world")
        await out.flush()
    print("done")

loop = asyncio.get_event_loop()
server = loop.run_until_complete(bad())
print(Path(FILENAME).stat().st_size)  # prints 0
server = loop.run_until_complete(good())
print(Path(FILENAME).stat().st_size)  # prints 11
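Applied to the server handler in the question, the fix is simply to await the aiofiles calls. This is a fragment of the handler body only, using the same names as above:

async with aiof.open(in_fp, loop=loop) as inp, aiof.open(out_fp, "w", loop=loop) as out:
    async for line in inp:
        await out.write(line.upper())  # write() is a coroutine, so it must be awaited
    await out.flush()                  # likewise for flush()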
However, I would strongly recommend trying to skip aiofiles and use regular, synchronous disk I/O, keeping asyncio for network activity:
with open(file, "w") as out: # regular file I/O
async for s in network_request(): # asyncio for slow network work. measure it!
out.write(s) # should be really quick, measure it!

Minimal example of HTTP server doing asynchronous database queries?

I'm playing with different asynchronous HTTP servers to see how they can handle multiple simultaneous connections. To force a time-consuming I/O operation I use the pg_sleep PostgreSQL function to emulate a time-consuming database query. Here is for instance what I did with Node.js:
var http = require('http');
var pg = require('pg');

var conString = "postgres://al:al@localhost/al";

/* SQL query that takes a long time to complete */
var slowQuery = 'SELECT 42 as number, pg_sleep(0.300);';

var server = http.createServer(function(req, res) {
    pg.connect(conString, function(err, client, done) {
        client.query(slowQuery, [], function(err, result) {
            done();
            res.writeHead(200, {'content-type': 'text/plain'});
            res.end("Result: " + result.rows[0].number);
        });
    });
})

console.log("Serve http://127.0.0.1:3001/")
server.listen(3001)
So this is a very simple request handler that runs an SQL query taking 300 ms and returns a response. When I benchmark it I get the following results:
$ ab -n 20 -c 10 http://127.0.0.1:3001/
Time taken for tests: 0.678 seconds
Complete requests: 20
Requests per second: 29.49 [#/sec] (mean)
Time per request: 339.116 [ms] (mean)
This shows clearly that requests are executed in parallel. Each request takes 300ms to complete and because we have 2 batches of 10 requests executed in parallel, it takes 600ms overall.
Now I'm trying to do the same with Elixir, since I heard it does asynchronous I/O transparently. Here is my naive approach:
defmodule Toto do
  import Plug.Conn

  def init(options) do
    {:ok, pid} = Postgrex.Connection.start_link(
      username: "al", password: "al", database: "al")
    options ++ [pid: pid]
  end

  def call(conn, opts) do
    sql = "SELECT 42, pg_sleep(0.300);"
    result = Postgrex.Connection.query!(opts[:pid], sql, [])
    [{value, _}] = result.rows
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "Result: #{value}")
  end
end
In case it might be relevant, here is my supervisor:
defmodule Toto.Supervisor do
  use Application

  def start(type, args) do
    import Supervisor.Spec, warn: false
    children = [
      worker(Plug.Adapters.Cowboy, [Toto, []], function: :http),
    ]
    opts = [strategy: :one_for_one, name: Toto.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
As you might expect, this doesn't give me the expected result:
$ ab -n 20 -c 10 http://127.0.0.1:4000/
Time taken for tests: 6.056 seconds
Requests per second: 3.30 [#/sec] (mean)
Time per request: 3028.038 [ms] (mean)
It looks like there's no parallelism, requests are handled one after the other. What am I doing wrong?
Elixir should be completely fine with this setup. The difference is that your Node.js code creates a connection to the database for every request. However, in your Elixir code, init is called once (and not per request!), so you end up with a single process that sends queries to Postgres for all requests, which then becomes your bottleneck.
The easiest solution would be to move the connection to Postgres out of init and into call. However, I would advise you to use Ecto, which will also set up a connection pool to the database. You can also play with the pool configuration for optimal results.
UPDATE: This was just test code; if you want to do something like this, see @AlexMarandon's Ecto pool answer instead.
I've just been playing with moving the connection setup as José suggested:
defmodule Toto do
  import Plug.Conn

  def init(options) do
    options
  end

  def call(conn, opts) do
    { :ok, pid } = Postgrex.Connection.start_link(username: "chris", password: "", database: "ecto_test")
    sql = "SELECT 42, pg_sleep(0.300);"
    result = Postgrex.Connection.query!(pid, sql, [])
    [{value, _}] = result.rows
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "Result: #{value}")
  end
end
With results:
% ab -n 20 -c 10 http://127.0.0.1:4000/
Time taken for tests: 0.832 seconds
Requests per second: 24.05 [#/sec] (mean)
Time per request: 415.818 [ms] (mean)
Here is the code I came up with following José's answer:
defmodule Toto do
  import Plug.Conn

  def init(options) do
    options
  end

  def call(conn, _opts) do
    sql = "SELECT 42, pg_sleep(0.300);"
    result = Ecto.Adapters.SQL.query(Repo, sql, [])
    [{value, _}] = result.rows
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "Result: #{value}")
  end
end
For this to work we need to declare a repo module:
defmodule Repo do
  use Ecto.Repo, otp_app: :toto
end
And start that repo in the supervisor:
defmodule Toto.Supervisor do
  use Application

  def start(type, args) do
    import Supervisor.Spec, warn: false
    children = [
      worker(Plug.Adapters.Cowboy, [Toto, []], function: :http),
      worker(Repo, [])
    ]
    opts = [strategy: :one_for_one, name: Toto.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
As José mentioned, I got the best performance by tweaking the configuration a bit:
config :toto, Repo,
  adapter: Ecto.Adapters.Postgres,
  database: "al",
  username: "al",
  password: "al",
  size: 10,
  lazy: false
Here is the result of my benchmark (after a few runs so that the pool has the time to "warm up") with default configuration:
$ ab -n 20 -c 10 http://127.0.0.1:4000/
Time taken for tests: 0.874 seconds
Requests per second: 22.89 [#/sec] (mean)
Time per request: 436.890 [ms] (mean)
And here is the result with size: 10 and lazy: false:
$ ab -n 20 -c 10 http://127.0.0.1:4000/
Time taken for tests: 0.619 seconds
Requests per second: 32.30 [#/sec] (mean)
Time per request: 309.564 [ms] (mean)
