Performance of Deno [closed]

I've been using Deno a little bit. For some reason, I decided to compare the performance of Deno and Node. Deno's performance seems to be a popular topic these days, but people are almost always discussing HTTP throughput. What I'm interested in is how Deno compares to Node on basic JS operations such as for loops, Array.map, JSON.stringify, and so on.
I'm comparing Deno 1.5.2 (V8 8.7.220.3) with Node 15.2.0 (V8 8.6.395.17). Since both embed nearly the same version of V8, I would expect the code to run at pretty much the same speed in both.
I prepared a .js file that runs 10 functions 1 million times each on both runtimes. I used Date.now() to measure the duration in milliseconds because it's available in both runtimes.
const list = [1, 2, 3, 4, 5, 6, 7, 8, 9];
const obj1 = {foo: 1};
const obj2 = {bar: 1};
const json = JSON.stringify(obj1);
// Declare the performance tool (different between Node and Deno)
let perf;
try {
  // Use perf_hooks in Node
  perf = require("perf_hooks").performance;
} catch {
  // Use the global performance object in Deno
  perf = performance;
}
// Declare the test functions
function mapFn() { return list.map(num => num * 10); }
function reduceFn() { return list.reduce((a, b) => a + b); }
function forEachFn() { return list.forEach(num => num * 10); }
function sliceFn() { return list.slice(0, 5); }
function forFn() { for (let i = 0; i < list.length; i++) { list[i] * 10; } }
function objectAssignFn() { return Object.assign({}, obj1, obj2); }
function jsonStringifyFn() { return JSON.stringify(obj1); }
function jsonParseFn() { return JSON.parse(json); }
function markFn() { perf.mark(); }
function nowFn() { perf.now(); }
// Loop through each test function and measure the time it takes
// to execute 1 million iterations.
[
  mapFn,
  reduceFn,
  forEachFn,
  sliceFn,
  forFn,
  objectAssignFn,
  jsonStringifyFn,
  jsonParseFn,
  markFn,
  nowFn,
].forEach(fn => {
  let start = Date.now();
  for (let i = 0; i < 1000000; i++) {
    fn();
  }
  let end = Date.now();
  console.log(`${fn.name} - Time elapsed: ${end - start} ms`);
});
Here are the results:
Results - Deno
// deno run app.js
mapFn - Time elapsed: 171 ms
reduceFn - Time elapsed: 26 ms
forEachFn - Time elapsed: 17 ms
sliceFn - Time elapsed: 54 ms
forFn - Time elapsed: 17 ms
objectAssignFn - Time elapsed: 95 ms
jsonStringifyFn - Time elapsed: 287 ms
jsonParseFn - Time elapsed: 472 ms
markFn - Time elapsed: 11905 ms
nowFn - Time elapsed: 10988 ms
Results - Node
// node app.js
mapFn - Time elapsed: 64 ms
reduceFn - Time elapsed: 27 ms
forEachFn - Time elapsed: 17 ms
sliceFn - Time elapsed: 76 ms
forFn - Time elapsed: 17 ms
objectAssignFn - Time elapsed: 116 ms
jsonStringifyFn - Time elapsed: 283 ms
jsonParseFn - Time elapsed: 400 ms
markFn - Time elapsed: 2459 ms
nowFn - Time elapsed: 95 ms
Both runtimes give very similar results, within roughly ±20%.
However, 3 results are quite surprising:
Array.map is almost 3x slower in Deno. I'd expect very similar performance since it's built into V8. Why is there such a difference when Array.forEach and Array.reduce perform similarly on both runtimes?
performance.now() is approx. 110x slower in Deno. I understand that the performance API is primarily there to benchmark our code, so how can it be so slow? Is it still considered an unstable feature?
performance.mark() is approx. 5x slower in Deno. Why is this the case?
Thanks!

Related

How to estimate current cost for logging?

Brand new to DataDog. I'm disappointed that my "usage" console doesn't give any indication of how much money I'm spending, so I'm trying to create a dashboard. Right now I simply want to show how much we are paying for logs this month.
I have the "sum of logs in bytes" (I think), but I'm having trouble converting that to dollars, due to my weakness in math as well as my lack of understanding of the DataDog interface. Below is my current effort. I'm dividing by 1024 three times to convert to GB, then dividing by 10 (because you can't multiply by .10) to account for the 10 cents per gigabyte, hoping to end up with the price. The result is 2.05e-3, and I have zero confidence that this is right.
{
  "viz": "query_value",
  "requests": [
    {
      "formulas": [
        {
          "formula": "(query1 / 1024 / 1024 / 1024) / 10"
        }
      ],
      "response_format": "scalar",
      "queries": [
        {
          "data_source": "metrics",
          "name": "query1",
          "query": "sum:datadog.estimated_usage.logs.ingested_bytes{*}.as_count()",
          "aggregator": "sum"
        }
      ]
    }
  ],
  "autoscale": true,
  "precision": 2,
  "timeseries_background": {
    "type": "bars"
  }
}
So I did some simple math in my head: if 1 GB = 10 cents, then 100 MB = 1 cent and 10 MB = .1 cent. So the 21 MB of logs I have should be costing .21 cents.
Working backwards from that answer, I came up with this formula:
"clamp_min((query1 / 1024 / 1024 / 1024) * 10, 0.1)"
This produces a result that looks right to me.
Note I also used clamp_min to keep the widget from showing scientific notation when the total is still very low at the beginning of the month.
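As a quick sanity check of the unit conversion, here is a small standalone sketch (plain C, nothing Datadog-specific; the 21 MB figure is just the example from above). It also explains the earlier 2.05e-3 result: dividing by 10 yields dollars, so 2.05e-3 was $0.00205, i.e. the expected ~0.21 cents, while multiplying by 10 reports the same amount in cents.
#include <stdio.h>

int main(void) {
    double bytes = 21.0 * 1024 * 1024;      /* ~21 MB of ingested logs */
    double gb = bytes / 1024 / 1024 / 1024; /* bytes -> GB */
    printf("%.4f GB -> $%.5f or %.3f cents\n", gb, gb * 0.10, gb * 10.0);
    /* prints: 0.0205 GB -> $0.00205 or 0.205 cents */
    return 0;
}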

check the available slots/resources before spawning MPI processes

Before spawning a number of worker processes, as shown below, I need to check whether that many slots are available, so that the code does not crash when the requested slots are not available.
int numworkers = settings.Parallelism + 1; //omp_get_num_procs();
MPI_Comm_spawn("./processes/montecarlo", MPI_ARGV_NULL, numworkers,
               MPI_INFO_NULL, 0, MPI_COMM_SELF, &workercomm,
               MPI_ERRCODES_IGNORE);
How can I check the available slots for MPI?
This happens in the context of a service accepting several requests. Suppose the total number of available slots is 13:
REQ1: spawns 5 processes
REQ2: spawns another 5 processes
REQ3: tries to spawn 5 processes, but crashes because only 3 are available
How can I check that only 3 are available? Or, alternatively, how can I handle the crash caused by the lack of resources? This crash is killing the service.
You can simply ask MPI_Comm_spawn() to return an error code instead of aborting the application:
MPI_Comm_set_errhandler(MPI_COMM_SELF, MPI_ERRORS_RETURN);
int res = MPI_Comm_spawn("./processes/montecarlo", MPI_ARGV_NULL, numworkers,
                         MPI_INFO_NULL, 0, MPI_COMM_SELF, &workercomm,
                         MPI_ERRCODES_IGNORE);
if (MPI_SUCCESS != res) {
    // MPI_Comm_spawn failed
}
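If you also want to estimate the free slots up front, one option is the MPI_UNIVERSE_SIZE attribute. A sketch, with two caveats: the attribute is optional (not every implementation or launch configuration sets it, hence the flag check), and slots_in_use is a hypothetical counter your service would have to maintain itself across requests:
#include <mpi.h>
#include <stdio.h>

int try_spawn_workers(int numworkers, int slots_in_use, MPI_Comm *workercomm)
{
    // MPI_UNIVERSE_SIZE is the total number of processes that can usefully
    // run; flag is zero when the implementation does not provide it.
    int *universe_size;
    int flag;
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &universe_size, &flag);
    if (flag && numworkers > *universe_size - slots_in_use) {
        fprintf(stderr, "only %d slot(s) free, not spawning %d workers\n",
                *universe_size - slots_in_use, numworkers);
        return -1;
    }
    // Keep the error handler as a fallback, since the estimate can be
    // stale or unavailable.
    MPI_Comm_set_errhandler(MPI_COMM_SELF, MPI_ERRORS_RETURN);
    int res = MPI_Comm_spawn("./processes/montecarlo", MPI_ARGV_NULL,
                             numworkers, MPI_INFO_NULL, 0, MPI_COMM_SELF,
                             workercomm, MPI_ERRCODES_IGNORE);
    return (res == MPI_SUCCESS) ? 0 : -1;
}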

Minimal example of HTTP server doing asynchronous database queries?

I'm playing with different asynchronous HTTP servers to see how they handle multiple simultaneous connections. To force a time-consuming I/O operation, I use the PostgreSQL pg_sleep function to emulate a slow database query. Here is, for instance, what I did with Node.js:
var http = require('http');
var pg = require('pg');

var conString = "postgres://al:al@localhost/al";

/* SQL query that takes a long time to complete */
var slowQuery = 'SELECT 42 as number, pg_sleep(0.300);';

var server = http.createServer(function(req, res) {
  pg.connect(conString, function(err, client, done) {
    client.query(slowQuery, [], function(err, result) {
      done();
      res.writeHead(200, {'content-type': 'text/plain'});
      res.end("Result: " + result.rows[0].number);
    });
  });
});

console.log("Serve http://127.0.0.1:3001/");
server.listen(3001);
So this is a very simple request handler that runs an SQL query taking 300 ms and returns a response. When I benchmark it I get the following results:
$ ab -n 20 -c 10 http://127.0.0.1:3001/
Time taken for tests: 0.678 seconds
Complete requests: 20
Requests per second: 29.49 [#/sec] (mean)
Time per request: 339.116 [ms] (mean)
This clearly shows that requests are executed in parallel: each request takes 300 ms to complete, and since the 20 requests run as 2 batches of 10 in parallel, the whole run takes about 600 ms.
Now I'm trying to do the same with Elixir, since I heard it does asynchronous I/O transparently. Here is my naive approach:
defmodule Toto do
  import Plug.Conn

  def init(options) do
    {:ok, pid} = Postgrex.Connection.start_link(
      username: "al", password: "al", database: "al")
    options ++ [pid: pid]
  end

  def call(conn, opts) do
    sql = "SELECT 42, pg_sleep(0.300);"
    result = Postgrex.Connection.query!(opts[:pid], sql, [])
    [{value, _}] = result.rows
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "Result: #{value}")
  end
end
In case it might be relevant, here is my supervisor:
defmodule Toto.Supervisor do
  use Application

  def start(type, args) do
    import Supervisor.Spec, warn: false
    children = [
      worker(Plug.Adapters.Cowboy, [Toto, []], function: :http),
    ]
    opts = [strategy: :one_for_one, name: Toto.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
As you might expect, this doesn't give me the expected result:
$ ab -n 20 -c 10 http://127.0.0.1:4000/
Time taken for tests: 6.056 seconds
Requests per second: 3.30 [#/sec] (mean)
Time per request: 3028.038 [ms] (mean)
It looks like there's no parallelism; requests are handled one after the other. What am I doing wrong?
Elixir should be completely fine with this setup. The difference is that your Node.js code creates a connection to the database for every request, while in your Elixir code init is called once (and not per request!), so you end up with a single process sending queries to Postgres for all requests, and that single process becomes your bottleneck.
The easiest solution would be to move the connection to Postgres out of init and into call. However, I would advise you to use Ecto which will set up a connection pool to the database too. You can also play with the pool configuration for optimal results.
UPDATE: This was just test code; if you want to do something like this, see @AlexMarandon's Ecto pool answer instead.
I've just been playing with moving the connection setup as José suggested:
defmodule Toto do
  import Plug.Conn

  def init(options) do
    options
  end

  def call(conn, opts) do
    {:ok, pid} = Postgrex.Connection.start_link(
      username: "chris", password: "", database: "ecto_test")
    sql = "SELECT 42, pg_sleep(0.300);"
    result = Postgrex.Connection.query!(pid, sql, [])
    [{value, _}] = result.rows
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "Result: #{value}")
  end
end
With results:
% ab -n 20 -c 10 http://127.0.0.1:4000/
Time taken for tests: 0.832 seconds
Requests per second: 24.05 [#/sec] (mean)
Time per request: 415.818 [ms] (mean)
Here is the code I came up with following José's answer:
defmodule Toto do
  import Plug.Conn

  def init(options) do
    options
  end

  def call(conn, _opts) do
    sql = "SELECT 42, pg_sleep(0.300);"
    result = Ecto.Adapters.SQL.query(Repo, sql, [])
    [{value, _}] = result.rows
    conn
    |> put_resp_content_type("text/plain")
    |> send_resp(200, "Result: #{value}")
  end
end
For this to work we need to declare a repo module:
defmodule Repo do
  use Ecto.Repo, otp_app: :toto
end
And start that repo in the supervisor:
defmodule Toto.Supervisor do
  use Application

  def start(type, args) do
    import Supervisor.Spec, warn: false
    children = [
      worker(Plug.Adapters.Cowboy, [Toto, []], function: :http),
      worker(Repo, [])
    ]
    opts = [strategy: :one_for_one, name: Toto.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
As José mentioned, I got the best performance by tweaking the configuration a bit:
config :toto, Repo,
  adapter: Ecto.Adapters.Postgres,
  database: "al",
  username: "al",
  password: "al",
  size: 10,
  lazy: false
Here is the result of my benchmark (after a few runs, so that the pool has time to "warm up") with the default configuration:
$ ab -n 20 -c 10 http://127.0.0.1:4000/
Time taken for tests: 0.874 seconds
Requests per second: 22.89 [#/sec] (mean)
Time per request: 436.890 [ms] (mean)
And here is the result with size: 10 and lazy: false:
$ ab -n 20 -c 10 http://127.0.0.1:4000/
Time taken for tests: 0.619 seconds
Requests per second: 32.30 [#/sec] (mean)
Time per request: 309.564 [ms] (mean)

Celluloid & performant HTTP requests

I'm trying to switch an existing crawler from EventMachine to Celluloid. To get acquainted with Celluloid, I've generated a bunch of static files, 150 kB each, on a Linux box, served via Nginx.
The code at the bottom should do its job, but there is an issue with it that I don't understand: the code should spawn a maximum of 50 threads because of the thread pool size of 50, but it spawns 180 of them. If I increase the pool size to 100, 330 threads are spawned. What's going wrong there?
A simple copy & paste of this code should work on every box, so any hints are welcome :)
#!/usr/bin/env jruby
require 'celluloid'
require 'open-uri'

URLS = *(1..1000)

@@requests = 0
@@responses = 0
@@total_size = 0

class Crawler
  include Celluloid

  def fetch(id)
    uri = URI("http://data.asconix.com/#{id}")
    puts "Request ##{@@requests += 1} -> #{uri}"
    begin
      req = open(uri).read
    rescue Exception => e
      puts e
    end
  end
end

URLS.each_slice(50).map do |idset|
  pool = Crawler.pool(size: 50)
  crawlers = idset.to_a.map do |id|
    begin
      pool.future(:fetch, id)
    rescue Celluloid::DeadActorError, Celluloid::MailboxError
    end
  end
  crawlers.compact.each do |resp|
    $stdout.print "Response ##{@@responses += 1} -> "
    if resp.value.size == 150000
      $stdout.print "OK\n"
      @@total_size += resp.value.size
    else
      $stdout.print "ERROR\n"
    end
  end
  pool.terminate
  puts "Actors left: #{Celluloid::Actor.all.to_set.length} -- Alive: #{Celluloid::Actor.all.to_set.select(&:alive?).length}"
end

$stdout.print "Requests total: #{@@requests}\n"
$stdout.print "Responses total: #{@@responses}\n"
$stdout.print "Size total: #{@@total_size} bytes\n"
By the way, the same issue occurs when I define the pool outside the each_slice loop:
....
@pool = Crawler.pool(size: 50)

URLS.each_slice(50).map do |idset|
  crawlers = idset.to_a.map do |id|
    begin
      @pool.future(:fetch, id)
    rescue Celluloid::DeadActorError, Celluloid::MailboxError
    end
  end
  crawlers.compact.each do |resp|
    $stdout.print "Response ##{@@responses += 1} -> "
    if resp.value.size == 150000
      $stdout.print "OK\n"
      @@total_size += resp.value.size
    else
      $stdout.print "ERROR\n"
    end
  end
  puts "Actors left: #{Celluloid::Actor.all.to_set.length} -- Alive: #{Celluloid::Actor.all.to_set.select(&:alive?).length}"
end
Which Ruby are you using? JRuby, Rubinius, etc.? And which version?
The reason I ask is that threads are handled differently by each Ruby. What you seem to be describing are the extra threads that come in for supervisors and for tasks. Looking at the date of your post, it is likely that fibers are actually backed by native threads, which suggests you are on JRuby. Also, using futures often invokes the internal thread pool, which has nothing to do with your pool.
For both of those reasons, and others like them you could look for, it makes sense that you'd see a higher thread count than your pool calls for. This question is a bit old though, so maybe you could follow up as to whether you still have this problem, and post the output.

PSI - Statusing Web Service - Results not as expected

I'm trying to update status information on assignments via the Statusing web service (PSI). The problem is that the results are not as expected. I'll try to explain what I'm doing in detail:
Two cases:
1) An assignment for the resource already exists on the specified task. I want to report work actuals (update status).
2) There is no assignment for the resource on the specified task. I want to create the assignment and then report work actuals.
I have one task in my project (Auto scheduled, Fixed work). Resource availability of all resources is set to 100%. They all have the same calendar.
Name: Task 31 - Fixed Work
Duration: 12,5 days?
Start: Thu 14.03.13
Finish: Tue 02.04.13
Resource Names: Resource 1
Work: 100 hrs
First I execute an UpdateStatus with the following ChangeXML:
<Changes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Proj ID="a8a601ce-f3ab-4c01-97ce-fecdad2359d9">
    <Assn ID="d7273a28-c038-486b-b997-cdb2450ceef5" ResID="8a164257-7960-4b76-9506-ccd0efabdb72">
      <Change PID="251658250">900000</Change>
    </Assn>
  </Proj>
</Changes>
Then I call SubmitStatusForResource:
client.SubmitStatusForResource(new Guid("8a164257-7960-4b76-9506-ccd0efabdb72"), null, "auto submit PSIStatusingGateway");
The following entry pops up in the Approval Center (which is what I expected):
Status Update; Task 31; Task update; Resource 1; 3/20/2012; 15h; 15%; 85h
Update in Project (still looks fine):
Task Name: Task 31 - Fixed Work
Duration: 12,5 days?
Start: Thu 14.03.13
Finish: Tue 02.04.13
Resource Names: Resource 1
Work: 100 hrs
Actual Work: 15 hrs
Remaining Work: 85 hrs
Then the second case is executed. First, I create a new assignment...
client.CreateNewAssignmentWithWork(
    sName: "Task 31 - Fixed Work",
    projGuid: "a8a601ce-f3ab-4c01-97ce-fecdad2359d9",
    taskGuid: "024d7b61-858b-40bb-ade3-009d7d821b3f",
    assnGuid: "e3451938-36a5-4df3-87b1-0eb4b25a1dab",
    sumTaskGuid: Guid.Empty,
    dtStart: "14.03.2013 08:00:00",
    dtFinish: "02.04.2013 15:36:00",
    actWork: 900000,
    fMilestone: false,
    fAddToTimesheet: false,
    fSubmit: false,
    sComment: "auto commit...");
Then I call the UpdateStatus again:
<Changes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Proj ID="a8a601ce-f3ab-4c01-97ce-fecdad2359d9">
    <Assn ID="e3451938-36a5-4df3-87b1-0eb4b25a1dab" ResID="c59ad8e2-7533-47bd-baa5-f5b03c3c43d6">
      <Change PID="251658250">900000</Change>
    </Assn>
  </Proj>
</Changes>
And finally I call SubmitStatusForResource again:
client.SubmitStatusForResource(new Guid("c59ad8e2-7533-47bd-baa5-f5b03c3c43d6"), null, "auto submit PSIStatusingGateway");
This creates the following entry in the Approval Center:
Status Update; Task 31 - Fixed Work; New reassignment request; Resource 2; 3/20/2012; 15h; 100%; 0h
I accept it and update my project:
Name: Task 31 - Fixed Work
Duration: 6,76 days?
Start: Thu 14.03.13
Finish: Mon 25.03.13
Resource Names: Resource 1;Resource 2
Work: 69,05 hrs
Actual Work: 30 hrs
Remaining Work: 39,05 hrs
And I really don't get why the new work would be 69,05 hours. The result I expected would have been:
Name: Task 31 - Fixed Work
Duration: 6,76 days?
Start: Thu 14.03.13
Finish: Mon 25.03.13
Resource Names: Resource 1;Resource 2
Work: 65 hrs
Actual Work: 30 hrs
Remaining Work: 35 hrs
I've spent quite a lot of time trying to find out how to update the values to get the results I want. I would really appreciate some help. This makes me want to rip my hair out!
Thanks in advance
PS: Forgot to say that I'm working with MS Project Server 2010 and MS Project Professional 2010
