Can we have asynchronous behaviour without threads?

Discussions of a program's asynchronous features generally move towards things like Futures and Promises, which in turn involve a multi-threaded environment.
Is it possible to write an asynchronous program without resorting to multiple threads?

You can't have async without multiple workers.
Even when you don't control them directly (e.g. in Node.js), they still exist in the background, so in those environments you can use them without explicitly working with threads or forks.
For example:
var fs = require("fs");

fs.readFile('example.txt', function (err, data) {
    if (!err) {
        console.log(data);
    }
}); // 'fs.readFile' is async

console.log("something else"); // This is executed right after the call above, most likely before the file has been read.

Related

Single threaded asynchronous event loop with `winit`

I'm trying to build an NES emulator using winit, which entails building a game loop that should run exactly 60 times per second.
At first, I used std::thread to create a separate thread where the game loop would run and wait 16 milliseconds before running again. This worked quite well, until I tried to compile the program again targeting WebAssembly. I then found out that both winit::window::Window and winit::event_loop::EventLoopProxy are not Send when targeting Wasm, and that std::thread::spawn panics in Wasm.
After some struggle, I decided to try to do the same thing using task::spawn_local from one of the main asynchronous runtimes. Ultimately, I went with async_std.
I'm not used to asynchronous programming, so I'm not even sure if what I'm trying to do could work.
My idea is to do something like this:
use winit::{window::WindowBuilder, event_loop::EventLoop};
use std::time::Duration;

fn main() {
    let event_loop = EventLoop::new();
    let _window = WindowBuilder::new()
        .build(&event_loop);

    async_std::task::spawn_local(async {
        // game loop goes here
        loop {
            // [update game state]
            // [update frame buffer]
            // [send render event with EventLoopProxy]
            async_std::task::sleep(Duration::from_millis(16)).await;
            // ^ note: I'll be using a different sleep function with Wasm
        }
    });

    event_loop.run(move |event, _, control_flow| {
        control_flow.set_wait();
        match event {
            // ...
            _ => ()
        }
    });
}
The problem with this approach is that the game loop never runs. If I'm not mistaken, some asynchronous code on the main thread would need to be awaited for the runtime to poll other Futures, such as the one spawned by spawn_local. I can't do this easily, since event_loop.run is not asynchronous.
Having time to await other events shouldn't be a problem, since the control flow is set to wait.
Testing this on native code, nothing inside the game loop ever runs. Testing this on Wasm code (with wasm_timer::Delay as the sleep function), the game loop does run, but at a very low framerate and with long intervals of halting.
Having explained my situation, I would like to ask: is it possible to do what I'm trying to do, and if it is, how would I approach it? I will also accept answers telling me how I could try to do this differently, such as by using web workers.
Thanks in advance!

ASP.NET Core, overkilling Task.Run()?

Let's say we have an ASP.NET Core application receiving a string as a payload, on the order of a couple of megabytes. First method implementation:
[HttpPost("updateinfos")]
public async Task UpdateInfos()
{
var len = (int)this.Request.ContentLength;
byte[] b = new byte[len];
await this.Request.Body.ReadAsync(b,0,len);
var content = Encoding.UTF8.GetString(b);
.....
}
The body is read with ReadAsync; this is good since we have I/O on a socket, and making it asynchronous is essentially free due to the nature of the call itself. But if we look at what comes after, the GetString() method is purely CPU-bound and blocks with linear complexity. This affects performance somehow, since other clients wait while my bytes get converted into a string. I think the way to avoid this is to run GetString() on the thread pool, like this:
[HttpPost("updateinfos")]
public async Task UpdateInfos()
{
var len = (int)this.Request.ContentLength;
byte[] b = new byte[len];
await this.Request.Body.ReadAsync(b,0,len);
var content = await Task.Run(()=>ASCIIEncoding.UTF8.GetString(b));
.....
}
Please don't mind the return value right now; something more has to be done in the function.
So the question: is the second approach overkill? If so, where is the boundary between what can be run as blocking code and what has to be moved to another thread?
You are very much abusing Task.Run there. Task.Run is used to off-load work onto a different thread and asynchronously wait for it to complete. So every Task.Run call will cause thread context switches. That is usually a very bad idea for things that should not run on their own thread.
Things like ASCIIEncoding.UTF8.GetString(b) are really fast. The overhead involved in creating and managing a thread that encapsulates this is much larger than just executing this directly on the same thread.
You should generally use Task.Run only to off-load (primarily synchronous) work that can benefit from running on its own thread, or in cases where you have work that would take a bit longer and would otherwise block the current execution.
In your example, that is simply not the case, so you should just call those methods synchronously.
If you really want to reduce the work for that code, you should look at how to work properly with streams. What your code does is read the request body stream completely and only then work on it (trying to translate it into a string).
Instead of separating the process of reading the binary stream and then translating it into a string, you could just read the stream as a string directly using a StreamReader. That way, you can read the request body directly, even asynchronously. So depending on what you actually do with it afterwards, that may be a lot more efficient.

Purely functional feedback suppression?

I have a problem that I can solve reasonably easily with classic imperative programming using state: I'm writing a co-browsing app that shares URLs between several nodes. The program has a module for communication that I call link and one for browser handling that I call browser. Now, when a URL arrives in link, I use the browser module to tell the actual web browser to start loading it.
The actual browser will then trigger the navigation detection for the incoming URL as it starts to load, and hence the URL will immediately be presented as a candidate for sending to the other side. That must be avoided, since it would create an infinite loop of link-following to the same URL, along the lines of the following (very conceptualized) pseudo-code (it's JavaScript, but please consider that a somewhat irrelevant implementation detail):
actualWebBrowser.urlListen.gotURL(function(url) {
    // Browser delivered a URL
    browser.process(url);
});

link.receivedAnURL(function(url) {
    actualWebBrowser.loadURL(url); // will eventually trigger the listener above
});
What I did first was to store every incoming URL in browser and simply eat the URL immediately when it arrives, then remove it from a 'recents' list in browser after a timeout, along the lines of this:
browser.recents = {}; // <--- mutable state
browser.recentsExpiry = 40000;

browser.doSend = function(url) {
    var now = (new Date).getTime();
    link.sendURL(url); // <-- URL goes out on the network
    // Side effect: mutating module state, clumsy clean-up mechanism :(
    browser.recents[url] = now;
    setTimeout(function() { delete browser.recents[url]; }, browser.recentsExpiry);
    return true;
};

browser.process = function(url) {
    if (/* sanity checks on `url` */) {
        var now = (new Date).getTime();
        var duplicate = browser.recents[url];
        if (!duplicate) return browser.doSend(url);
        if ((now - duplicate) > browser.recentsExpiry) {
            return browser.doSend(url);
        }
        return false;
    }
};
It works but I'm a bit disappointed by my solution because of my habitual use of mutable state in browser. Is there a "Better Way (tm)" using immutable data structures/functional programming or the like for a situation like this?
A more functional approach to handling long-lived state is to use it as a parameter to a recursive function, and have one execution of the function responsible for handling a single "action" of some kind, then calling itself again with the new state.
F#'s MailboxProcessor is one example of this kind of approach. However, it does depend on having the processing happen on an independent thread, which isn't the same as the event-driven style of your code.
As you identify, the setTimeout in your code complicates the state management. One way you could simplify this is to instead have browser.process filter out any timed-out URLs before it does anything else. That would also eliminate the need for the extra timeout check on the specific URL it is processing.
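As a minimal sketch of that suggestion (illustrative names only, assuming the same link module as in the question), the recents map can be passed in and returned as a value instead of living as module state, with expired entries pruned up front:

function processUrl(recents, url, now, expiryMs) {
    // Prune anything that has timed out before doing anything else.
    var pruned = {};
    for (var u in recents) {
        if ((now - recents[u]) <= expiryMs) pruned[u] = recents[u];
    }
    // If the URL is still in the pruned map, it was seen recently: suppress it.
    if (pruned[url]) {
        return { recents: pruned, send: false };
    }
    pruned[url] = now;
    return { recents: pruned, send: true };
}

// The impure edge of the program threads the state through each call:
var state = {};
function onUrl(url) {
    var result = processUrl(state, url, (new Date).getTime(), 40000);
    state = result.recents;
    if (result.send) link.sendURL(url);
}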
Even if you can't eliminate mutable state from your code entirely, you should think carefully about the scope and lifetime of that state.
For example, might you want multiple independent browsers? If so, you should think about how the recents set can be encapsulated to belong to just a single browser, so that you don't get collisions. Even if you don't need multiple ones for your actual application, this might help testability.
There are various ways you might keep the state private to a specific browser, depending in part on what features the language has available. For example in a language with objects a natural way would be to make it a private member of a browser object.
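In JavaScript specifically, one way to sketch that encapsulation is a factory whose closure holds the recents map privately per browser instance (again illustrative only, reusing the hypothetical processUrl above):

function makeBrowser(expiryMs) {
    var recents = {}; // private to this browser instance
    return {
        process: function(url) {
            var result = processUrl(recents, url, (new Date).getTime(), expiryMs);
            recents = result.recents;
            return result.send;
        }
    };
}

var browserA = makeBrowser(40000);
var browserB = makeBrowser(40000); // independent state, which also helps testability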

How to change the dart-sqlite code from synchronous style to asynchronous?

I'm trying to use Dart with sqlite, with this project dart-sqlite.
But I found a problem: the API it provides is synchronous in style. The code looks like this:
// Iterating over a result set
var count = c.execute("SELECT * FROM posts LIMIT 10", callback: (row) {
  print("${row.title}: ${row.body}");
});
print("Showing ${count} posts.");
With such code, I can't use Dart's Future support, and the code will block on SQL operations.
I wonder how to change the code to an asynchronous style. You can see it defines some native functions here: https://github.com/sam-mccall/dart-sqlite/blob/master/lib/sqlite.dart#L238
_prepare(db, query, statementObject) native 'PrepareStatement';
_reset(statement) native 'Reset';
_bind(statement, params) native 'Bind';
_column_info(statement) native 'ColumnInfo';
_step(statement) native 'Step';
_closeStatement(statement) native 'CloseStatement';
_new(path) native 'New';
_close(handle) native 'Close';
_version() native 'Version';
The native functions are mapped to some c++ functions here: https://github.com/sam-mccall/dart-sqlite/blob/master/src/dart_sqlite.cc
Is it possible to change it to asynchronous? If so, how should I do it?
If it's not possible and I have to rewrite it, do I have to rewrite all of:
The dart file
The c++ wrapper file
The actual sqlite driver
UPDATE:
Thanks to @GregLowe's comment: Dart's Completer can convert callback style to future style, which lets me use doSomething().then(...) instead of passing a callback function.
But after reading the source of dart-sqlite, I realized that in its implementation the callback is not event-based:
int execute([params = const [], bool callback(Row)]) {
  _checkOpen();
  _reset(_statement);
  if (params.length > 0) _bind(_statement, params);
  var result;
  int count = 0;
  var info = null;
  while ((result = _step(_statement)) is! int) {
    count++;
    if (info == null) info = new _ResultInfo(_column_info(_statement));
    if (callback != null && callback(new Row._internal(count - 1, info, result)) == true) {
      result = count;
      break;
    }
  }
  // If update affected no rows, count == result == 0
  return (count == 0) ? result : count;
}
Even if I use a Completer, it won't improve the performance. I think I may have to rewrite the C++ code to make it event-based first.
You should be able to write a wrapper without touching the C++. Have a look at how to use the Completer class in dart:async. Basically you need to create a Completer, return Completer.future immediately, and then call Completer.complete(row) from the existing callback.
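The same wrapping pattern, written as a rough JavaScript/Promise analogue of Dart's Completer (the names here are made up for illustration and are not dart-sqlite's API):

// Hypothetical callback-style API standing in for dart-sqlite's execute();
// it delivers two fake rows synchronously so the sketch is self-contained.
function executeWithCallback(sql, onRow, onDone) {
    onRow({ title: "first", body: "..." });
    onRow({ title: "second", body: "..." });
    onDone(null);
}

// The wrapper creates the promise up front (like creating a Completer and
// returning its future) and settles it from the existing callbacks
// (like calling completer.complete(...)).
function executeAsync(sql) {
    return new Promise(function (resolve, reject) {
        var rows = [];
        executeWithCallback(sql,
            function (row) { rows.push(row); },
            function (err) { if (err) reject(err); else resolve(rows); });
    });
}

// Usage: executeAsync("SELECT * FROM posts").then(function (rows) { ... });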
Re: update. Have you seen this article, specifically the bit about asynchronous extensions? i.e. If the C++ API is synchronous you can run it in a separate thread, and use messaging to communicate with it. This could be a way to do it.
The big problem you've got is that SQLite is an embedded database; in order to process your query and provide your results, it must do computation (and I/O) in your process. What's more, in order for its transaction handling system to work, it either needs its connection to be in the thread that created it, or for you to run in serialized mode (with a performance hit).
Because these are fairly hard constraints, your plan of switching things to an asynchronous operation mode is unlikely to go well except by using multiple threads. Since using multiple connections complicates things a lot (as you can't share some things between them, such as TEMP TABLEs) let's consider going for a single serialized connection; all activity will be serialized at the DB level, but for an application that doesn't use the DB a lot it will be OK. At the C++ level, you'd be talking about calling that execute from another thread and then sending messages back to the caller thread to indicate each row and the completion.
But you'll take a real hit when you do this; in particular, you're committing to only doing one query at a time, as the technique runs into significant problems with semantic effects when you start using two connections at once and the DB forces serialization on you with one connection.
It might be simpler to do the above by putting the synchronous-asynchronous coupling at the Dart level by managing the worker thread and inter-thread communication there. That would let you avoid having to change the C++ code significantly. I don't know Dart well enough to be able to give much advice there.
Myself, I'd just stick with synchronous connection processing so that I can make my application use multi-threaded mode more usefully. I'd be taking the hit with the semantics and giving each thread its own connection (possibly allocated lazily) so that overall speed was better, but I do come from a programming community that regards threads as relatively heavyweight resources, so make of that what you will. (Heavy threads can do things that reduce the number of locks they need that it makes no sense to try to do with light threads; it's about overhead management.)

Is asynchronous call of JavaScript function possible in Windows Script Host?

Imagine you have a simple function in Windows Script Host (JScript) environment:
function say(text) {
    WScript.Sleep(5000);
    WScript.Echo(text);
}
Is it possible to call say() asynchronously?
Note: browser-based methods such as setInterval() or setTimeout() are not available in WSH.
No, Windows Script Host doesn't support calling script functions asynchronously. You'll have to run two scripts simultaneously to achieve this effect:
// [main.js]
var oShell = new ActiveXObject("WScript.Shell");
oShell.Run(WScript.FullName + " say.js Hello");
WScript.Echo("Hello from main");
// [say.js]
WScript.Sleep(5000);
WScript.Echo(WScript.Arguments.Item(0));
As far as I know, there is no equivalent to setTimeout / setInterval under Windows Script Host (shockingly). However, you may find this simple function queue in another answer here on SO a useful starting point for emulating it. Basically what the guy did (his name is also "TJ", but it's not me) was to create a function queue, and then you call its main loop as your main method. The main loop emulates the heartbeat in browser-based implementations. Quite clever, though I'd change the naming a bit.
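A minimal sketch of that kind of function queue under WSH/JScript (illustrative names, not the code from the linked answer):

// Queue of { due: <ms timestamp>, fn: <function> } entries.
var queue = [];

function setTimeoutLike(fn, delayMs) {
    queue.push({ due: new Date().getTime() + delayMs, fn: fn });
}

// The "main loop" that stands in for the browser's heartbeat.
function runQueue() {
    while (queue.length > 0) {
        var now = new Date().getTime();
        for (var i = queue.length - 1; i >= 0; i--) {
            if (queue[i].due <= now) {
                var entry = queue.splice(i, 1)[0];
                entry.fn();
            }
        }
        WScript.Sleep(50); // coarse heartbeat; WSH offers nothing finer built in
    }
}

// The say() from the question, delayed via the queue instead of a blocking Sleep:
setTimeoutLike(function () { WScript.Echo("Hello after 5 seconds"); }, 5000);
setTimeoutLike(function () { WScript.Echo("Hello almost immediately"); }, 10);
runQueue();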

Resources