I have a reasonably simple piece of code using TcpStream with an SslStream around it, reading the socket line by line with BufReader. Sometimes the iterator just stops returning any data, with Ok(0):
let mut stream = TcpStream::connect((self.config.host.as_ref(), self.config.port)).unwrap();
if self.config.ssl {
    let context = ssl::SslContext::new(ssl::SslMethod::Tlsv1_2).unwrap();
    let mut stream = ssl::SslStream::connect(&context, stream).unwrap();
    self.stream = Some(ssl::MaybeSslStream::Ssl(stream));
} else {
    self.stream = Some(ssl::MaybeSslStream::Normal(stream));
}
...
let read_stream = clone_stream(&self.stream);
let line_reader = BufReader::new(read_stream);
for line in line_reader.lines() {
    match line {
        Ok(line) => {
            ...
        }
        Err(e) => panic!("line read failed: {}", e),
    }
}
println!("lines out, {:?}", self.stream);
The loop just stops randomly; as far as I can tell, there's no reason to believe the socket was closed server-side. Calling self.stream.as_mut().unwrap().read_to_end(&mut buf) after the loop has ended returns Ok(0).
Any advice on how this is expected to be handled? I don't get any Err, so I assume the socket is still alive, but then I can't read anything from it. What is the current state of the socket and how should I proceed?
PS: I'm providing the implementation of clone_stream for reference, as advised by a commenter.
fn clone_stream(stream: &Option<ssl::MaybeSslStream<TcpStream>>) -> ssl::MaybeSslStream<TcpStream> {
    if let &Some(ref s) = stream {
        match s {
            &ssl::MaybeSslStream::Ssl(ref s) => ssl::MaybeSslStream::Ssl(s.try_clone().unwrap()),
            &ssl::MaybeSslStream::Normal(ref s) => ssl::MaybeSslStream::Normal(s.try_clone().unwrap()),
        }
    } else {
        panic!();
    }
}
Surprisingly, this turned out to be a "default timeout" on the client, I guess (the socket was moving into the CLOSE_WAIT state).
I fixed it by first adding:
stream.set_read_timeout(Some(Duration::new(60 * 5, 0))).unwrap();
stream.set_write_timeout(Some(Duration::new(60 * 5, 0))).unwrap();
That made the iterator fail with ErrorKind::WouldBlock on timeout, at which point I added code to send a ping packet over the wire; the next iteration then worked exactly as expected.
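In case it helps anyone, here's a minimal sketch of the read loop I ended up with. The PING payload is hypothetical - substitute whatever keep-alive packet your protocol uses - and note that a timeout arriving mid-line would drop the partial line in this simplified version:

use std::io::{BufRead, BufReader, ErrorKind, Write};
use std::net::TcpStream;
use std::time::Duration;

fn read_loop(stream: TcpStream) -> std::io::Result<()> {
    stream.set_read_timeout(Some(Duration::new(60 * 5, 0)))?;
    let mut writer = stream.try_clone()?;
    let mut reader = BufReader::new(stream);
    let mut line = String::new();
    loop {
        line.clear();
        match reader.read_line(&mut line) {
            // Ok(0) is a real EOF: the peer actually closed the connection
            Ok(0) => return Ok(()),
            Ok(_) => { /* handle the line */ }
            // the timeout surfaces as WouldBlock (Unix) or TimedOut (Windows);
            // send a keep-alive and try reading again instead of bailing out
            Err(e) if e.kind() == ErrorKind::WouldBlock || e.kind() == ErrorKind::TimedOut => {
                writer.write_all(b"PING\r\n")?; // hypothetical ping packet
            }
            Err(e) => return Err(e),
        }
    }
}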
I'm trying to write a parallel data loader for deep learning in Rust. The task is to write an iterator that under the hood does the following:
1. Reads files from disk and applies some compute-heavy preprocessing to them; the result is generally a numeric array (or multiple arrays)
2. Groups the results of the previous step into batches of size B and "collates" them - this generally means just concatenating the arrays - moderately compute-heavy
3. Yields the results from step 2.
Step 1 can be both IO-bound and compute-bound, depending on network latency, the size of the files, and the complexity of the preprocessing. It has to be run in parallel by many workers. Step 2 should be off the main thread but likely doesn't need a pool of workers. Step 3 happens on the main thread (exposed to Python).
The reason I'm writing it in Rust is that Python offers two options: a pure Python implementation shipped with PyTorch, based on multiprocessing, which is somewhat slow but very flexible (arbitrary user-defined data preprocessing and batching), and a C++ implementation shipped with Tensorflow, which is assembled by the user from a set of predefined primitives. The latter is substantially faster but too restrictive for the kinds of data processing I wish to do. I expect that Rust will give me the speed of Tensorflow with the flexibility of arbitrary code, as in PyTorch.
My question is purely about the way to implement parallelism. The ideal setup is to have N workers for step 1 -> channel -> worker for step 2 -> channel -> step 3. Because the iterator object may be dropped at any time, there is a strict requirement to be able to terminate the whole scheme after Drop. On the other hand, there is the flexibility of loading the files in an arbitrary order: for example, if the batch size B == 16 and max_n_threads == 32, it is perfectly fine to start 32 workers and yield a first batch containing whichever 16 examples happen to return first. This can be exploited for speed.
My naive implementation creates the DataLoader in 3 steps:
1. Create an n_working: Arc<AtomicUsize> to control the number of active worker threads and a should_shutdown: Arc<AtomicBool> to signal shutdown (when Drop is called)
2. Create a thread responsible for maintaining the pool. It spins on n_working < max_n_threads and keeps spawning worker threads, which terminate on should_shutdown and otherwise fetch a single example, send it down the worker->batcher channel, and decrement n_working
3. Create a batching thread which polls the worker->batcher channel and, upon receiving B objects, concatenates them into a batch and sends it down the batcher->yielder channel
#[pyclass]
struct DataLoader {
    collate_worker: Option<thread::JoinHandle<()>>,
    example_worker: Option<thread::JoinHandle<()>>,
    should_shut_down: Arc<AtomicBool>,
    receiver: Receiver<Batch>,
    length: usize,
}

impl DataLoader {
    fn new(
        dataset: Dataset,
        batch_size: usize,
        capacity: usize,
    ) -> Self {
        let n_batches = dataset.len() / batch_size;
        let max_n_threads = capacity * batch_size;
        let (example_sender, collate_receiver) = bounded((batch_size - 1) * capacity);
        let should_shut_down = Arc::new(AtomicBool::new(false));

        let shutdown_flag = should_shut_down.clone();
        let example_worker = thread::spawn(move || {
            rayon::scope_fifo(|s| {
                let dataset = &dataset;
                let n_working = Arc::new(AtomicUsize::new(0));
                let mut current_index = 0;
                while current_index < n_batches * batch_size {
                    if n_working.load(Ordering::Relaxed) == max_n_threads {
                        continue;
                    }
                    if shutdown_flag.load(Ordering::Relaxed) {
                        break;
                    }
                    let index = current_index;
                    let sender = example_sender.clone();
                    let counter = n_working.clone();
                    let shutdown_flag = shutdown_flag.clone();
                    s.spawn_fifo(move |_s| {
                        let example = dataset.get_example(index);
                        if !shutdown_flag.load(Ordering::Relaxed) {
                            _ = sender.send(example);
                        } // if we should shut down, skip sending
                        counter.fetch_sub(1, Ordering::Relaxed);
                    });
                    current_index += 1;
                    n_working.fetch_add(1, Ordering::Relaxed);
                }
            });
        });

        let (batch_sender, final_receiver) = bounded(capacity);
        let shutdown_flag = should_shut_down.clone();
        let collate_worker = thread::spawn(move || {
            'outer: loop {
                let mut batch = vec![];
                for _ in 0..batch_size {
                    if let Ok(example) = collate_receiver.recv() {
                        batch.push(example);
                    } else {
                        break 'outer;
                    }
                }
                let collated = collate(batch);
                if shutdown_flag.load(Ordering::Relaxed) {
                    break; // skip sending
                }
                _ = batch_sender.send(collated);
            }
        });

        Self {
            collate_worker: Some(collate_worker),
            example_worker: Some(example_worker),
            should_shut_down,
            receiver: final_receiver,
            length: n_batches,
        }
    }
}

#[pymethods]
impl DataLoader {
    fn __iter__(slf: PyRef<Self>) -> PyRef<Self> { slf }

    fn __next__(&mut self) -> Option<Batch> {
        self.receiver.recv().ok()
    }

    fn __len__(&self) -> usize {
        self.length
    }
}

impl Drop for DataLoader {
    fn drop(&mut self) {
        self.should_shut_down.store(true, Ordering::Relaxed);
        if self.collate_worker.take().unwrap().join().is_err() {
            println!("Panic in collate worker");
        }
        if self.example_worker.take().unwrap().join().is_err() {
            println!("Panic in example_worker");
        }
        println!("dropped the dataloader");
    }
}
This implementation works and roughly matches the performance of PyTorch, but provides no significant speedup. I don't know where to look for improvements, but I imagine it would help to have the thing load-balance automatically in a work-stealing way and to flexibly spawn workers depending on the proportion of IO and compute time. I am also expecting performance issues due to the spinning pool manager, and likely corner cases in my handling of Drop.
My question is how best to approach the problem. I am generally unsure whether this should be tackled with parallel crates like rayon, async crates like tokio, or a mix of both. I also have a hunch that my implementation could be much simpler with the correct use of their combinators/higher-order APIs. I tried rayon, but I couldn't get a solution which doesn't wastefully enforce the original sequential return order and still respects the Drop requirement.
Okay I think I've figured out a solution for you that uses rayon parallel iterators.
The trick is to use Results in the rayon iterators, and return Err if the cancellation flag is set.
I first created a utility type to create a cancellable thread in which you can execute rayon iterators. You use it by passing in the thread closure which takes the atomic cancellation token as a parameter. Then you have to check if the cancellation token is true, and if so, exit early.
use std::sync::Arc;
use std::sync::atomic::{Ordering, AtomicBool};
use std::thread::JoinHandle;

fn collate(batch: &[Computed]) -> Batch {
    batch.iter().map(|&x| i128::from(x)).sum()
}

#[derive(Debug)]
struct Cancelled;

struct CancellableThread<Output: Send + 'static> {
    cancel_token: Arc<AtomicBool>,
    thread: Option<JoinHandle<Result<Output, Cancelled>>>,
}

impl<Output: Send + 'static> CancellableThread<Output> {
    fn new<F: FnOnce(Arc<AtomicBool>) -> Result<Output, Cancelled> + Send + 'static>(init: F) -> Self {
        let cancel_token = Arc::new(AtomicBool::new(false));
        let thread_cancel_token = Arc::clone(&cancel_token);
        CancellableThread {
            thread: Some(std::thread::spawn(move || init(thread_cancel_token))),
            cancel_token,
        }
    }

    fn output(mut self) -> Output {
        self.thread.take().unwrap().join().unwrap().unwrap()
    }
}

impl<Output: Send + 'static> Drop for CancellableThread<Output> {
    fn drop(&mut self) {
        self.cancel_token.store(true, Ordering::Relaxed);
        if let Some(thread) = self.thread.take() {
            let _ = thread.join().unwrap();
        }
    }
}
I found it useful to create a closure that returns a Result<(), Cancelled> so I could use the try operator (?) to exit early.
CancellableThread::new(move |cancel_token| {
    let cancelled = || if cancel_token.load(Ordering::Relaxed) {
        Err(Cancelled)
    } else {
        Ok(())
    };

    loop {
        // was the thread dropped?
        // if so, stop what we're doing
        cancelled()?;

        // do stuff and
        // eventually return a result
    }
});
I then used that CancellableThread abstraction in the DataLoader. There's no need to create a special Drop impl for it, because by default drop will be called on each field anyway, which will handle the cancellation.
type Data = Vec<u8>;
type Dataset = Vec<Data>;
type Computed = u64;
type Batch = i128;

use rayon::prelude::*;
use crossbeam::channel::{unbounded, Receiver};

struct DataLoader {
    example_worker: CancellableThread<()>,
    collate_worker: CancellableThread<()>,
    receiver: Receiver<Batch>,
    length: usize,
}
I used unbounded channels, as it was one less thing to bother about. It shouldn't be hard to switch to bounded ones instead.
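If you do want backpressure, a minimal sketch of the bounded variant (the capacity of 16 is an arbitrary placeholder you'd tune for your workload):

use crossbeam::channel::bounded;

fn main() {
    // bounded(16) gives backpressure: send blocks once 16 items are in flight
    let (sender, receiver) = bounded::<u64>(16);
    sender.send(42).unwrap();
    assert_eq!(receiver.recv().unwrap(), 42);
}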
impl DataLoader {
    fn new(dataset: Dataset, batch_size: usize) -> Self {
        let (example_sender, collate_receiver) = unbounded();
        let (batch_sender, final_receiver) = unbounded();
I'm not sure if you can always guarantee that the number of items in your dataset will be a multiple of the batch_size, so I decided to handle that explicitly.
        let length = if dataset.len() % batch_size == 0 {
            dataset.len() / batch_size
        } else {
            dataset.len() / batch_size + 1
        };
I created the collating worker first, though that may not be necessary. As you can see, I had to duplicate a little bit to handle partial batches.
        let collate_worker = CancellableThread::new(move |cancel_token| {
            let cancelled = || if cancel_token.load(Ordering::Relaxed) {
                Err(Cancelled)
            } else {
                Ok(())
            };

            'outer: loop {
                let mut batch = Vec::with_capacity(batch_size);

                for _ in 0..batch_size {
                    cancelled()?;

                    if let Ok(data) = collate_receiver.recv() {
                        batch.push(data);
                    } else {
                        if !batch.is_empty() {
                            // handle the last batch, if there
                            // weren't enough items to fill it
                            let collated = collate(&batch);
                            cancelled()?;
                            batch_sender.send(collated).unwrap();
                        }
                        break 'outer;
                    }
                }

                let collated = collate(&batch);
                cancelled()?;
                batch_sender.send(collated).unwrap();
            }

            Ok(())
        });
The example worker is where things are really made much simpler, because we can just use rayon parallel iterators. As you can see, we check for cancellation before each heavy computation.
        let example_worker = CancellableThread::new(move |cancel_token| {
            let cancelled = || if cancel_token.load(Ordering::Relaxed) {
                Err(Cancelled)
            } else {
                Ok(())
            };

            let heavy_compute = |data: Data| -> Result<Computed, Cancelled> {
                cancelled()?;
                Ok(data.iter().map(|&x| u64::from(x)).product())
            };

            dataset
                .into_par_iter()
                .map(heavy_compute)
                .try_for_each(|computed| {
                    example_sender.send(computed?).unwrap();
                    Ok(())
                })
        });
Then we just construct the DataLoader. You can see the Python impl is identical:
        DataLoader {
            example_worker,
            collate_worker,
            receiver: final_receiver,
            length,
        }
    }
}
// #[pymethods]
impl DataLoader {
    fn __iter__(this: Self /* PyRef<Self> */) -> Self /* PyRef<Self> */ { this }

    fn __next__(&mut self) -> Option<Batch> {
        self.receiver.recv().ok()
    }

    fn __len__(&self) -> usize {
        self.length
    }
}
playground
I was unsure whether I should post this here or on Code Review, but Code Review seems to accept only functioning code.
So I have a multitude of problems I don't really understand.
(I'm a noob.) The full code can be found here: https://github.com/NicTanghe/winder/blob/main/src/main.rs
The main problem is here:
let temp = location_loc1.parent().unwrap();
location_loc1.push(&temp);
I've tried various things to get around the problems with borrowing as mutable or as a reference, but I can't seem to get it to work. I just get a different set of errors with everything I try.
Furthermore, I'm sorry if this is a duplicate, but looking for separate solutions to the individual errors just led me from one error to another, in a circle.
Full function
async fn print_events(mut selector_loc1: i8, location_loc1: PathBuf) {
    let mut reader = EventStream::new();

    loop {
        //let delay = Delay::new(Duration::from_millis(1_000)).fuse();
        let mut event = reader.next().fuse();

        select! {
            // _ = delay => {
            //     print!("{esc}[2J{esc}[1;1H{}", esc = 27 as char,);
            // },
            maybe_event = event => {
                match maybe_event {
                    Some(Ok(event)) => {
                        //println!("Event::{:?}\r", event);
                        // if event == Event::Mouse(MouseEvent::Up("Left").into()) {
                        //     println!("Cursor position: {:?}\r", position());
                        // }
                        print!("{esc}[2J{esc}[1;1H{}", esc = 27 as char,);
                        if event == Event::Key(KeyCode::Char('k').into()) {
                            if selector_loc1 > 0 {
                                selector_loc1 -= 1;
                            };
                            //println!("go down");
                            //println!("{}",selected)
                        } else if event == Event::Key(KeyCode::Char('j').into()) {
                            selector_loc1 += 1;
                            //println!("go up");
                            //println!("{}",selected)
                        } else if event == Event::Key(KeyCode::Char('h').into()) {
                            //-----------------------------------------
                            //-------------BackLogic-------------------
                            //-----------------------------------------
                            let temp = location_loc1.parent().unwrap();
                            location_loc1.push(&temp);
                            //------------------------------------------
                            //------------------------------------------
                        } else if event == Event::Key(KeyCode::Char('l').into()) {
                            //go to next dir
                        }
                        if event == Event::Key(KeyCode::Esc.into()) {
                            break;
                        }
                        printtype(location_loc1, selector_loc1);
                    }
                    Some(Err(e)) => println!("Error: {:?}\r", e),
                    None => break,
                }
            }
        };
    }
}
Also, it seems that using
use async_std::path::{Path, PathBuf};
makes Rust not recognize the unwrap() function → how would I go about using it?
There are two problems with your code.
Your PathBuf is immutable. It's not possible to modify immutable objects unless they support interior mutability, and PathBuf does not. Therefore you have to make your variable mutable. You can either add mut in front of it, like this:
async fn print_events(mut selector_loc1: i8, mut location_loc1: PathBuf) {
Or you can re-bind it:
let mut location_loc1 = location_loc1;
You cannot borrow it both mutably and immutably - the mutable borrows are exclusive! Given that the method .parent() borrows the buffer, you have to create a temporary owned value:
// the PathBuf instance
let mut path = PathBuf::from("root/parent/child");
// notice the .map(|p| p.to_owned()) method - it helps us avoid the immutable borrow
let parent = path.parent().map(|p| p.to_owned()).unwrap();
// now it's fine to modify it, as it's not borrowed
path.push(parent);
Your second question:
also, it seems that using use async_std::path::{Path, PathBuf}; makes Rust not recognize the unwrap() function → how would I go about using it?
The async-std version is just a wrapper over std's PathBuf. It just delegates to the standard implementation, so it should not behave differently:
// copied from async-std's PathBuf implementation
pub struct PathBuf {
    inner: std::path::PathBuf,
}
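As a quick sanity check (untested here, and assuming async-std keeps mirroring std's API as described above), the same parent()/to_owned() dance should compile unchanged against the async-std types:

use async_std::path::PathBuf;

fn main() {
    let mut path = PathBuf::from("root/parent/child");
    // parent() returns Option<&Path>, just like in std, so unwrap() is available
    let parent = path.parent().map(|p| p.to_owned()).unwrap();
    path.push(parent);
}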
Problem
I have implemented a function that returns an array of missing indexes from the packets sent. If 100 packets are sent, then the server will have a vector of indexes holding 0 (missing) or 1 (not missing). I don't want to trigger this function every time, only when there is a slight delay in which no packet is received. I want to change my synchronous function into an asynchronous debouncing function.
My attempt to solve the debouncing issue
I am looking for a solution that implements a timer (say, 300 ms) whose value is constantly overwritten by different threads. Once its value is no longer being overwritten, it should trigger a block of code or function. I am using Tokio.
This is pseudo code of what I want to achieve:
// thanks https://stackoverflow.com/questions/26593387/how-can-i-get-the-current-time-in-milliseconds
fn get_epoch() -> u128 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis()
}
impl Server {
    async fn run(self) -> Result<(), io::Error> {
        let Server {
            socket,
            mut buf,
            mut to_send,
        } = self;

        let mut timer_delay = get_epoch();

        loop {
            if let Some((size, peer)) = to_send {
                timer_delay = get_epoch(); // "reset" the timer to a closer value
            }

            futures::join!(
                /* execute a block of code if true */
                if get_epoch() - timer_delay > 300,
                /* else (default case): */
                to_send = Some(socket.recv_from(&mut buf))
            );
        }
    }
}
I based my project on the following example from Tokio:
impl Server {
    async fn run(self) -> Result<(), io::Error> {
        let Server {
            socket,
            mut buf,
            mut to_send,
        } = self;

        loop {
            // First we check to see if there's a message we need to echo back.
            // If so then we try to send it back to the original source, waiting
            // until it's writable and we're able to do so.
            if let Some((size, peer)) = to_send {
                let amt = socket.send_to(&buf[..size], &peer).await?;
                println!("Echoed {}/{} bytes to {}", amt, size, peer);
            }

            // If we're here then `to_send` is `None`, so we take a look for the
            // next message we're going to echo back.
            to_send = Some(socket.recv_from(&mut buf).await?);
        }
    }
}
Spawn another Tokio task for debouncing that will listen to a channel. You can tell when the channel hasn't received anything in a while by using a timeout. When the timeout occurs, that's the signal that you should perform your infrequent action. Don't forget to perform that action when the channel closes as well:
use std::time::Duration;
use tokio::{sync::mpsc, task, time}; // 1.3.0

#[tokio::main]
async fn main() {
    let (debounce_tx, mut debounce_rx) = mpsc::channel(10);
    let (network_tx, mut network_rx) = mpsc::channel(10);

    // Listen for events
    let debouncer = task::spawn(async move {
        let duration = Duration::from_millis(10);

        loop {
            match time::timeout(duration, debounce_rx.recv()).await {
                Ok(Some(())) => {
                    eprintln!("Network activity")
                }
                Ok(None) => {
                    eprintln!("Debounce finished");
                    break;
                }
                Err(_) => {
                    eprintln!("{:?} since network activity", duration)
                }
            }
        }
    });

    // Listen for network activity
    let server = task::spawn({
        let debounce_tx = debounce_tx.clone();
        async move {
            while let Some(packet) = network_rx.recv().await {
                // Received a packet
                debounce_tx
                    .send(())
                    .await
                    .expect("Unable to talk to debounce");
                eprintln!("Received a packet: {:?}", packet);
            }
        }
    });

    // Prevent deadlocks
    drop(debounce_tx);

    // Drive the network input
    network_tx.send(1).await.expect("Unable to talk to network");
    network_tx.send(2).await.expect("Unable to talk to network");
    network_tx.send(3).await.expect("Unable to talk to network");
    time::sleep(Duration::from_millis(20)).await;
    network_tx.send(4).await.expect("Unable to talk to network");
    network_tx.send(5).await.expect("Unable to talk to network");
    network_tx.send(6).await.expect("Unable to talk to network");
    time::sleep(Duration::from_millis(20)).await;

    // Close the network
    drop(network_tx);

    // Wait for everything to finish
    server.await.expect("Server panicked");
    debouncer.await.expect("Debouncer panicked");
}
Received a packet: 1
Received a packet: 2
Received a packet: 3
Network activity
Network activity
Network activity
10ms since network activity
10ms since network activity
Received a packet: 4
Received a packet: 5
Received a packet: 6
Network activity
Network activity
Network activity
10ms since network activity
Debounce finished
I'm trying to simultaneously send to and receive from a multicast IP with Rust.
use futures::executor::block_on;
use async_std::task;
use std::{net::{UdpSocket, Ipv4Addr}, time::{Duration, Instant}};

fn main() {
    let future = async_main();
    block_on(future);
}

async fn async_main() {
    let mut socket = UdpSocket::bind("0.0.0.0:8888").unwrap();
    let multi_addr = Ipv4Addr::new(234, 2, 2, 2);
    let inter = Ipv4Addr::new(0, 0, 0, 0);
    socket.join_multicast_v4(&multi_addr, &inter);
    let async_one = first(&socket);
    let async_two = second(&socket);
    futures::join!(async_one, async_two);
}

async fn first(socket: &std::net::UdpSocket) {
    let mut buf = [0u8; 65535];
    let now = Instant::now();
    loop {
        if now.elapsed().as_secs() > 10 { break; }
        let (amt, src) = socket.recv_from(&mut buf).unwrap();
        println!("received {} bytes from {:?}", amt, src);
    }
}

async fn second(socket: &std::net::UdpSocket) {
    let now = Instant::now();
    loop {
        if now.elapsed().as_secs() > 10 { break; }
        socket.send_to(String::from("h").as_bytes(), "234.2.2.2:8888").unwrap();
    }
}
The issue with this is that it first runs the receive function and then runs the send function; it never sends and receives simultaneously. With Golang I can do this with goroutines, but I'm finding it quite difficult in Rust.
I'm not very experienced with async in Rust, but your first() and second() functions don't appear to have any asynchronous calls in them - in other words, there are no calls that use .await. My understanding is that if nothing is awaited, the functions will run synchronously, and I believe you get a compiler warning about it as well.
It doesn't look like std::net::UdpSocket provides any async methods that can be awaited; you need to use async_std::net::UdpSocket instead.
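Not tested against your setup, but here's a rough sketch of what that change could look like, using async_std::net::UdpSocket with explicit .awaits (if I remember right, async-std's join_multicast_v4 takes the addresses by value rather than by reference; the 10-second cutoffs mirror your originals):

use async_std::net::UdpSocket;
use std::net::Ipv4Addr;
use std::time::Instant;

async fn first(socket: &UdpSocket) {
    let mut buf = [0u8; 65535];
    let now = Instant::now();
    loop {
        if now.elapsed().as_secs() > 10 { break; }
        // .await yields to the executor, letting `second` make progress
        let (amt, src) = socket.recv_from(&mut buf).await.unwrap();
        println!("received {} bytes from {:?}", amt, src);
    }
}

async fn second(socket: &UdpSocket) {
    let now = Instant::now();
    loop {
        if now.elapsed().as_secs() > 10 { break; }
        socket.send_to(b"h", "234.2.2.2:8888").await.unwrap();
    }
}

fn main() {
    async_std::task::block_on(async {
        let socket = UdpSocket::bind("0.0.0.0:8888").await.unwrap();
        socket
            .join_multicast_v4(Ipv4Addr::new(234, 2, 2, 2), Ipv4Addr::new(0, 0, 0, 0))
            .unwrap();
        // both futures now actually interleave on the same thread
        futures::join!(first(&socket), second(&socket));
    });
}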
This is Node.js's end() implementation:
OutgoingMessage.prototype.end = function(data, encoding) {
  if (this.finished) {
    return false;
  }
  if (!this._header) {
    this._implicitHeader();
  }
  if (data && !this._hasBody) {
    console.error('This type of response MUST NOT have a body. ' +
                  'Ignoring data passed to end().');
    data = false;
  }

  var ret;

  var hot = this._headerSent === false &&
            typeof(data) === 'string' &&
            data.length > 0 &&
            this.output.length === 0 &&
            this.connection &&
            this.connection.writable &&
            this.connection._httpMessage === this;

  if (hot) {
    // Hot path. They're doing
    //   res.writeHead();
    //   res.end(blah);
    // HACKY.
    if (this.chunkedEncoding) {
      var l = Buffer.byteLength(data, encoding).toString(16);
      ret = this.connection.write(this._header + l + CRLF +
                                  data + '\r\n0\r\n' +
                                  this._trailer + '\r\n', encoding);
    } else {
      ret = this.connection.write(this._header + data, encoding);
    }
    this._headerSent = true;
  } else if (data) {
    // Normal body write.
    ret = this.write(data, encoding);
  }

  if (!hot) {
    if (this.chunkedEncoding) {
      ret = this._send('0\r\n' + this._trailer + '\r\n'); // Last chunk.
    } else {
      // Force a flush, HACK.
      ret = this._send('');
    }
  }

  this.finished = true;

  // There is the first message on the outgoing queue, and we've sent
  // everything to the socket.
  if (this.output.length === 0 && this.connection._httpMessage === this) {
    debug('outgoing message end.');
    this._finish();
  }

  return ret;
};
Source: https://github.com/joyent/node/blob/master/lib/http.js#L645
Apparently, the connection is only "finished" when output.length === 0.
So, if there is still data waiting to be written, and the receiving client is for some reason dodgy about receiving this data, will the request ever be ended?
I have also seen such an issue where end was not effective while trying to end an HTTP request made by a Flash uploader. I ended up doing the following, which did help:
res.end(failureJSON, 'utf8');
req.once('end', function _destroyConn() {
  req.connection.destroy();
});
This seems very hackish. Anyway, is such behavior with req.connection.destroy needed to guarantee a disconnection from the socket?
Unfortunately, res.end() does not directly “guarantee a disconnection of the socket” because it needs to account for HTTP Keep-Alive. According to the docs, end tells the server that everything has been sent, and that the response is complete. It’s entirely up to the server object whether or not to disconnect immediately.
To answer your question more specifically, the important thing is that the response needs to emit a finish event. If you take a look at the implementation of _finish(), it pretty much just emits the event.
As you noted though, it doesn’t always call _finish() directly…but it did set this.finished = true. When _flush() executes, it sends any remaining data and THEN calls _finish().
It’s kind of complicated, and I don’t think I can go into any more detail without the risk of being wrong.
As far as connections sometimes not closing goes, have you checked whether you have keep-alive configured properly? If the HTTP connection is set up with keep-alive by default, then calling end will not close the socket.
If you print out res.shouldKeepAlive, it will tell you whether your server is trying to use keep-alive. Set it to false at the start of your request handler if you want to stop the server from doing this.
I don't know if this helps you, as I am building my framework for Node 4.4+, but I have confirmed that you can send a Connection: close header in your response to get Node to close the connection.
let res = getResponseSomehow()
res.statusCode = 408
res.setHeader("Connection", "close")
res.end()
Also your destroy code could use the following tweak:
// First we give node the time to close the connection
// We can give it 2 seconds
let socket = getSocketSomehow();

let timer = setTimeout(function() {
  socket.destroy();
}, 2000);

socket.on("close", function() {
  clearTimeout(timer);
});
I'm not quite sure if it's the close event you want; I normally try to use a library and stay away from the net API, so this is just a guess.