<< "[read] I/O error: Read timed out" immediately upon sending headers - http

We see time-outs during some calls to an external REST service from within a Spring Boot application. They do not seem to occur when we connect to the REST service directly. Debug logging on org.apache.http has revealed a very peculiar aspect of the failing requests: they contain an inbound log entry '<< "[read] I/O error: Read timed out"' in the middle of sending headers, in the same millisecond the first headers were sent.
How can we see an inbound 'Read timed out' a few milliseconds after sending the first headers? And why does it not immediately interrupt the request/connection with a time-out, but instead waits the full 4500ms until it times out with an exception?
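On the second question, my understanding (a minimal stdlib sketch, hypothetical class name, timeout scaled down to 200 ms) is that a blocking socket read can only fail once the full SO_TIMEOUT window has elapsed with no data; nothing aborts the request earlier:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SoTimeoutDemo {
    // Opens a loopback connection to a server that never sends a byte,
    // then measures how long a blocking read() waits before SO_TIMEOUT fires.
    static long timedRead(int timeoutMs) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort());
             Socket accepted = server.accept()) {
            client.setSoTimeout(timeoutMs); // analogous to "set socket timeout to 4500"
            long start = System.nanoTime();
            try {
                client.getInputStream().read(); // blocks: the peer stays silent
                throw new IllegalStateException("peer unexpectedly sent data");
            } catch (SocketTimeoutException expected) {
                // The exception surfaces only after the full window has elapsed.
                return (System.nanoTime() - start) / 1_000_000;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Read timed out after ~" + timedRead(200) + " ms");
    }
}
```

This would mirror the "set socket timeout to 4500" line below: the SocketTimeoutException can only surface once the 4500 ms read window has fully elapsed.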
Here is our production log for a failing request, redacted; the entries are in reverse chronological order. Note the ~4500ms delay between the second and third lines. My question is about the occurrence of http-outgoing-104 << "[read] I/O error: Read timed out" at 16:55:08.258, not the one on the second line.
16:55:12.764 Connection released: [id: 104][route: {s}-><<website-redacted>>:443][total kept alive: 0; route allocated: 0 of 2; total allocated: 0 of 20]
16:55:12.763 http-outgoing-104 << "[read] I/O error: Read timed out"
16:55:08.259 http-outgoing-104 >> "<<POST Body Redacted>>"
16:55:08.259 http-outgoing-104 >> "[\r][\n]"
16:55:08.258 http-outgoing-104: set socket timeout to 4500
16:55:08.258 Executing request POST <<Endpoint Redacted>> HTTP/1.1
16:55:08.258 Target auth state: UNCHALLENGED
16:55:08.258 Proxy auth state: UNCHALLENGED
16:55:08.258 Connection leased: [id: 104][route: {s}-><<website-redacted>>:443][total kept alive: 0; route allocated: 1 of 2; total allocated: 1 of 20]
....
16:55:08.258 http-outgoing-104 >> "POST <<Endpoint Redacted>> HTTP/1.1[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Accept: text/plain, application/json, application/*+json, */*[\r][\n]"
16:55:08.258 http-outgoing-104 >> Cookie: <<Redacted>>
16:55:08.258 http-outgoing-104 >> "Content-Type: application/json[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Connection: close[\r][\n]"
16:55:08.258 http-outgoing-104 >> "X-B3-SpanId: <<ID>>[\r][\n]"
16:55:08.258 http-outgoing-104 << "[read] I/O error: Read timed out"
16:55:08.258 http-outgoing-104 >> "X-Span-Name: https:<<Endpoint Redacted>>[\r][\n]"
16:55:08.258 http-outgoing-104 >> "X-B3-TraceId: <<ID>>[\r][\n]"
16:55:08.258 http-outgoing-104 >> "X-B3-ParentSpanId: <<ID>>[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Content-Length: 90[\r][\n]"
16:55:08.258 http-outgoing-104 >> "User-Agent: Apache-HttpClient/4.5.3 (Java/1.8.0_172)[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Cookie: <<Redacted>>"
16:55:08.258 http-outgoing-104 >> "Host: <<Host redacted>>[\r][\n]"
16:55:08.258 http-outgoing-104 >> "Accept-Encoding: gzip,deflate[\r][\n]"
16:55:08.258 http-outgoing-104 >> "X-B3-Sampled: 1[\r][\n]"
Update 1: a second occurrence:
Another request that timed out shows roughly the same behavior, but here the timeout message is logged even before the headers are sent, well before the actual timeout eventually arrives. Note: this request is actually older; after it, I configured the request to include 'Connection: close' to circumvent a firewall dropping the connection under 'Keep-Alive'.
19:28:08.102 http-outgoing-36 << "[read] I/O error: Read timed out"
19:28:08.102 http-outgoing-36: Shutdown connection
19:28:08.102 http-outgoing-36: Close connection
19:28:03.598 http-outgoing-36 >> "Connection: Keep-Alive[\r][\n]"
19:28:03.598 http-outgoing-36 >> "Content-Type: application/json;charset=UTF-8[\r][\n]"
...
19:28:03.598 http-outgoing-36 >> "Accept-Encoding: gzip,deflate[\r][\n]"
...
19:28:03.597 http-outgoing-36 >> Cookie: ....
19:28:03.597 http-outgoing-36 >> Accept-Encoding: gzip,deflate
19:28:03.597 http-outgoing-36 >> User-Agent: Apache-HttpClient/4.5.3 (Java/1.8.0_172)
19:28:03.596 Connection leased: [id: 36][route: {s}-><< Site redacted >>:443][total kept alive: 0; route allocated: 1 of 2; total allocated: 1 of 20]
19:28:03.596 http-outgoing-36: set socket timeout to 4500
19:28:03.596 Executing request POST HTTP/1.1
19:28:03.596 Target auth state: UNCHALLENGED
19:28:03.596 http-outgoing-36 << "[read] I/O error: Read timed out"
19:28:03.594 Connection request: [route: {s}-><< Site redacted >>:443][total kept alive: 1; route allocated: 1 of 2; total allocated: 1 of 20]
19:28:03.594 Auth cache not set in the context
Update 2: added HttpClientBuilder configuration
RequestConfig.Builder requestBuilder = RequestConfig.custom()
.setSocketTimeout(socketTimeout)
.setConnectTimeout(connectTimeout);
CloseableHttpClient httpClient = HttpClientBuilder.create()
.setDefaultRequestConfig(requestBuilder.build())
.build();
HttpComponentsClientHttpRequestFactory rf = new HttpComponentsClientHttpRequestFactory(httpClient);
return new RestTemplate(rf);

Related

ffmpeg conversion for rtmp stream, fname empty

I want to use the hls_variant feature of the NGINX RTMP module, but if I follow the examples in the documentation I can't get it to work.
I have the following test:
application Test {
live on;
record off;
on_publish http://127.0.0.1/php/rtmp_auth.php;
on_publish_done http://127.0.0.1/php/on_publish_done.php;
exec_push /usr/local/bin/ffmpeg -loglevel debug -i rtmp://localhost:1935/Test/$name
-c:v libx264 -acodec aac -preset veryfast -b:v 256k -tune zerolatency -vf "scale=480:trunc(ow/a/2)*2" -f flv rtmp://localhost:1935/Z006/$name_low
-c:v libx264 -acodec aac -preset veryfast -b:v 768k -tune zerolatency -vf "scale=720:trunc(ow/a/2)*2" -f flv rtmp://localhost:1935/Z006/$name_mid
-c:v libx264 -acodec aac -preset veryfast -b:v 1024k -tune zerolatency -vf "scale=960:trunc(ow/a/2)*2" -f flv rtmp://localhost:1935/Z006/$name_high
-c:v libx264 -acodec aac -preset veryfast -b:v 1920k -tune zerolatency -vf "scale=1280:trunc(ow/a/2)*2" -f flv rtmp://localhost:1935/Z006/$name_higher
-c copy -f flv rtmp://localhost:1935/Z006/$name_src 2>>/tmp/Log.log; }
application Z006 {
live on;
record off;
hls on;
hls_path /usr/local/www/stream/tmp/hls0;
hls_nested on;
hls_variant _low BANDWIDTH=288000; # _low - Low bitrate, sub-SD resolution
hls_variant _mid BANDWIDTH=448000; # _mid - Medium bitrate, SD resolution
hls_variant _high BANDWIDTH=1152000; # _high - Higher-than-SD resolution
hls_variant _higher BANDWIDTH=2048000; # _higher - High bitrate, HD 720p resolution
hls_variant _src BANDWIDTH=4096000; # _src - Source bitrate, source resolution
}
As you can see there is nothing special about that.
The generated log looks like this:
ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers
built with FreeBSD clang version 10.0.1 (git#github.com:llvm/llvm-project.git llvmorg-10.0.1-0-gef32c611aa2)
configuration: --prefix=/usr/local --mandir=/usr/local/man --datadir=/usr/local/share/ffmpeg --pkgconfigdir=/usr/local/libdata/pkgconfig --disable-static --enable-shared --enable-pic --enable-gpl --enable-avresample --cc=cc --cxx=c++ ->
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
libpostproc 55. 9.100 / 55. 9.100
Splitting the commandline.
Reading option '-loglevel' ... matched as option 'loglevel' (set logging level) with argument 'debug'.
Reading option '-i' ... matched as input url with argument 'rtmp://localhost:1935/Test//'.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'libx264'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'aac'.
Reading option '-preset' ... matched as AVOption 'preset' with argument 'veryfast'.
Reading option '-b:v' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '256k'.
Reading option '-tune' ... matched as AVOption 'tune' with argument 'zerolatency'.
Reading option '-vf' ... matched as option 'vf' (set video filters) with argument 'scale=480:trunc(ow/a/2)*2'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
Reading option 'rtmp://localhost:1935/Z006//_low' ... matched as output url.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'libx264'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'aac'.
Reading option '-preset' ... matched as AVOption 'preset' with argument 'veryfast'.
Reading option '-b:v' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '768k'.
Reading option '-tune' ... matched as AVOption 'tune' with argument 'zerolatency'.
Reading option '-vf' ... matched as option 'vf' (set video filters) with argument 'scale=720:trunc(ow/a/2)*2'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
Reading option 'rtmp://localhost:1935/Z006//_mid' ... matched as output url.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'libx264'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'aac'.
Reading option '-preset' ... matched as AVOption 'preset' with argument 'veryfast'.
Reading option '-b:v' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '1024k'.
Reading option '-tune' ... matched as AVOption 'tune' with argument 'zerolatency'.
Reading option '-vf' ... matched as option 'vf' (set video filters) with argument 'scale=960:trunc(ow/a/2)*2'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
Reading option 'rtmp://localhost:1935/Z006//_high' ... matched as output url.
Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'libx264'.
Reading option '-acodec' ... matched as option 'acodec' (force audio codec ('copy' to copy stream)) with argument 'aac'.
Reading option '-preset' ... matched as AVOption 'preset' with argument 'veryfast'.
Reading option '-b:v' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '1920k'.
Reading option '-tune' ... matched as AVOption 'tune' with argument 'zerolatency'.
Reading option '-vf' ... matched as option 'vf' (set video filters) with argument 'scale=1280:trunc(ow/a/2)*2'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
Reading option 'rtmp://localhost:1935/Z006//_higher' ... matched as output url.
Reading option '-c' ... matched as option 'c' (codec name) with argument 'copy'.
Reading option '-f' ... matched as option 'f' (force format) with argument 'flv'.
Reading option 'rtmp://localhost:1935///_src' ... matched as output url.
Finished splitting the commandline.
Parsing a group of options: global .
Applying option loglevel (set logging level) with argument debug.
Successfully parsed a group of options.
Parsing a group of options: input url rtmp://localhost:1935/Test//.
Successfully parsed a group of options.
Opening an input file: rtmp://localhost:1935/Test//.
[NULL # 0x80609a000] Opening 'rtmp://localhost:1935/Test//' for reading
[rtmp # 0x806089300] No default whitelist set
[tcp # 0x806089380] No default whitelist set
[tcp # 0x806089380] Original list of addresses:
[tcp # 0x806089380] Address 127.0.0.1 port 1935
[tcp # 0x806089380] Address ::1 port 1935
[tcp # 0x806089380] Interleaved list of addresses:
[tcp # 0x806089380] Address 127.0.0.1 port 1935
[tcp # 0x806089380] Address ::1 port 1935
[tcp # 0x806089380] Starting connection attempt to 127.0.0.1 port 1935
[tcp # 0x806089380] Successfully connected to 127.0.0.1 port 1935
[rtmp # 0x806089300] Handshaking...
[rtmp # 0x806089300] Type answer 3
[rtmp # 0x806089300] Server version 13.14.10.13
[rtmp # 0x806089300] Proto = rtmp, path = /Test//, app = Test/, fname =
[rtmp # 0x806089300] Window acknowledgement size = 5000000
[rtmp # 0x806089300] Max sent, unacked = 5000000
[rtmp # 0x806089300] New incoming chunk size = 4096
[rtmp # 0x806089300] Creating stream...
[rtmp # 0x806089300] Sending play command for ''
I think the error is clearly the empty 'fname', but I don't know what to do about it.
EDIT:
Even if I change $name to the stream name I use in OBS, the conversion isn't started.
I think I got it working.
After days of testing and rebuilding with debug support, I found the issue.
It all starts when you use the combination of:
on_publish
exec
php-fpm
The on_publish request is handled first via php-fpm. It produces the following logs:
2021/11/29 09:50:42 [debug] 90542#0: epoll timer: 9996
2021/11/29 09:50:42 [debug] 90542#0: epoll: fd:15 ev:2005 d:00007FEFED509668
2021/11/29 09:50:42 [debug] 90542#0: *3 http upstream request: "/php/rtmp_auth.php?"
2021/11/29 09:50:42 [debug] 90542#0: *3 http upstream process header
2021/11/29 09:50:42 [debug] 90542#0: *3 malloc: 000056047409F050:4096
2021/11/29 09:50:42 [debug] 90542#0: *3 posix_memalign: 00005604740A0060:4096 #16
2021/11/29 09:50:42 [debug] 90542#0: *3 recv: eof:1, avail:-1
2021/11/29 09:50:42 [debug] 90542#0: *3 recv: fd:15 80 of 4096
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi record byte: 01
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi record byte: 06
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi record byte: 00
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi record byte: 01
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi record byte: 00
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi record byte: 36
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi record byte: 02
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi record byte: 00
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi record length: 54
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi parser: 0
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi header: "Location: /"
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi parser: 0
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi header: "Content-type: text/html; charset=UTF-8"
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi parser: 1
2021/11/29 09:50:42 [debug] 90542#0: *3 http fastcgi header done
2021/11/29 09:50:42 [debug] 90542#0: *3 HTTP/1.1 302 Moved Temporarily
Server: nginx/1.20.2
Date: Mon, 29 Nov 2021 08:50:42 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
Location: /
As you can see, FastCGI sets the HTTP status code to 302 with Location "/".
This Location becomes the $name variable used by exec!
So we have to force this Location header to be our stream name.
Example rtmp_auth.php
<?php
$streamName = $_POST["name"];
//Check for streamkey here
//If okay, set Location and Return Code (Set 403 if Auth failed)
header('Location: '.$streamName, TRUE, 200);
?>
After we set the new Location manually, the log changes to:
2021/11/29 09:57:56 [debug] 90618#0: epoll timer: 10000
2021/11/29 09:57:56 [debug] 90618#0: epoll: fd:15 ev:2005 d:00007F491020B668
2021/11/29 09:57:56 [debug] 90618#0: *3 http upstream request: "/php/rtmp_auth.php?"
2021/11/29 09:57:56 [debug] 90618#0: *3 http upstream process header
2021/11/29 09:57:56 [debug] 90618#0: *3 malloc: 000055F9E1EC9050:4096
2021/11/29 09:57:56 [debug] 90618#0: *3 posix_memalign: 000055F9E1ECA060:4096 #16
2021/11/29 09:57:56 [debug] 90618#0: *3 recv: eof:1, avail:-1
2021/11/29 09:57:56 [debug] 90618#0: *3 recv: fd:15 88 of 4096
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi record byte: 01
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi record byte: 06
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi record byte: 00
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi record byte: 01
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi record byte: 00
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi record byte: 40
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi record byte: 00
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi record byte: 00
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi record length: 64
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi parser: 0
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi header: "Location: Test_input"
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi parser: 0
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi header: "Content-type: text/html; charset=UTF-8"
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi parser: 1
2021/11/29 09:57:56 [debug] 90618#0: *3 http fastcgi header done
2021/11/29 09:57:56 [debug] 90618#0: *3 HTTP/1.1 302 Moved Temporarily
Server: nginx/1.20.2
Date: Mon, 29 Nov 2021 08:57:56 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
Location: Test_input
Now the $name variable is set for the next use with the exec command.
I hope this information saves someone a few hours of testing.

OkHttp3: Getting an 'unexpected end of stream' exception while reading a large HTTP response

I have a Java client, that is making a POST call to the v1/graphql endpoint of a Hasura server (v1.3.3)
I'm making the HTTP call using the Square okhttp3 library (v4.9.1). The data transfer is happening over HTTP1.1, using chunked transfer-encoding.
The client is failing with the following error:
Caused by: java.net.ProtocolException: unexpected end of stream
at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:415) ~[okhttp-4.9.1.jar:?]
at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276) ~[okhttp-4.9.1.jar:?]
at okio.RealBufferedSource.read(RealBufferedSource.kt:189) ~[okio-jvm-2.8.0.jar:?]
at okio.RealBufferedSource.exhausted(RealBufferedSource.kt:197) ~[okio-jvm-2.8.0.jar:?]
at okio.InflaterSource.refill(InflaterSource.kt:112) ~[okio-jvm-2.8.0.jar:?]
at okio.InflaterSource.readOrInflate(InflaterSource.kt:76) ~[okio-jvm-2.8.0.jar:?]
at okio.InflaterSource.read(InflaterSource.kt:49) ~[okio-jvm-2.8.0.jar:?]
at okio.GzipSource.read(GzipSource.kt:69) ~[okio-jvm-2.8.0.jar:?]
at okio.Buffer.writeAll(Buffer.kt:1642) ~[okio-jvm-2.8.0.jar:?]
at okio.RealBufferedSource.readString(RealBufferedSource.kt:95) ~[okio-jvm-2.8.0.jar:?]
at okhttp3.ResponseBody.string(ResponseBody.kt:187) ~[okhttp-4.9.1.jar:?]
Request Headers:
INFO: Content-Type: application/json; charset=utf-8
INFO: Content-Length: 1928
INFO: Host: localhost:10191
INFO: Connection: Keep-Alive
INFO: Accept-Encoding: gzip
INFO: User-Agent: okhttp/4.9.1
Response headers:
INFO: Transfer-Encoding: chunked
INFO: Date: Tue, 27 Apr 2021 12:06:39 GMT
INFO: Server: Warp/3.3.10
INFO: x-request-id: d019408e-e2e3-4583-bcd6-050d4a496b11
INFO: Content-Type: application/json; charset=utf-8
INFO: Content-Encoding: gzip
This is the client code used for the making the POST call:
private static final MediaType MEDIA_TYPE_JSON = MediaType.parse("application/json; charset=utf-8");
private static OkHttpClient okHttpClient = new OkHttpClient.Builder()
.connectTimeout(30, TimeUnit.SECONDS)
.writeTimeout(5, TimeUnit.MINUTES)
.readTimeout(5, TimeUnit.MINUTES)
.addNetworkInterceptor(loggingInterceptor)
.build();
public GenericHttpResponse httpPost(String url, String textBody, GenericHttpMediaType genericMediaType) throws HttpClientException {
RequestBody body = RequestBody.create(MEDIA_TYPE_JSON, textBody);
Request postRequest = new Request.Builder().url(url).post(body).build();
Call postCall = okHttpClient.newCall(postRequest);
Response postResponse = postCall.execute();
return GenericHttpResponse
.builder()
.body(postResponse.body().string())
.headers(postResponse.headers().toMultimap())
.code(postResponse.code())
.build();
}
This failure is only happening for large response sizes. As per the server logs, the response size (after gzip encoding) is around 52MB, but the call is still failing. This same code has been working fine for response sizes around 10-15MB.
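My understanding of the error itself (a simplified stdlib sketch, not OkHttp's actual decoder): a chunked body is only complete once a zero-length final chunk arrives, and "unexpected end of stream" means the connection ended before that terminator. A minimal decoder makes the rule visible:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ChunkedDemo {
    // Decodes an HTTP/1.1 chunked body; throws EOFException if the stream
    // ends before the terminating zero-length chunk ("0\r\n\r\n") arrives.
    static String decodeChunked(String raw) throws IOException {
        InputStream in = new ByteArrayInputStream(raw.getBytes(StandardCharsets.ISO_8859_1));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while (true) {
            int size = Integer.parseInt(readLine(in), 16); // chunk-size line, hex
            if (size == 0) return out.toString("ISO-8859-1"); // final chunk: body complete
            for (int i = 0; i < size; i++) {
                int b = in.read();
                if (b < 0) throw new EOFException("unexpected end of stream");
                out.write(b);
            }
            readLine(in); // CRLF after the chunk data
        }
    }

    private static String readLine(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            if (b != '\r') sb.append((char) b);
        }
        if (b == -1 && sb.length() == 0) throw new EOFException("unexpected end of stream");
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(decodeChunked("5\r\nhello\r\n0\r\n\r\n")); // complete body
        try {
            decodeChunked("5\r\nhello\r\n"); // connection cut before the 0-chunk
        } catch (EOFException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

So even with correct data chunks, a connection dropped mid-response (e.g. by a proxy or idle timeout) would produce exactly this exception.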
I tried replicating the same issue through a simple cURL call, but that ran successfully:
curl -v -s --request POST 'http://<hasura_endpoint>/v1/graphql' \
--header 'Content-Type: application/json' \
--header 'Accept-Encoding: gzip, deflate, br' \
--data-raw '...'
* Trying ::1...
* TCP_NODELAY set
* Connected to <host> (::1) port <port> (#0)
> POST /v1/graphql HTTP/1.1
> Host: <host>:<port>
> User-Agent: curl/7.64.1
> Accept: */*
> Content-Type: application/json
> Accept-Encoding: gzip, deflate, br
> Content-Length: 1840
> Expect: 100-continue
>
< HTTP/1.1 100 Continue
} [1840 bytes data]
* We are completely uploaded and fine
< HTTP/1.1 200 OK
< Transfer-Encoding: chunked
< Date: Tue, 27 Apr 2021 11:59:24 GMT
< Server: Warp/3.3.10
< x-request-id: 27e3ff3f-8b95-4328-a1bc-a5492e68f995
< Content-Type: application/json; charset=utf-8
< Content-Encoding: gzip
<
{ [6 bytes data]
* Connection #0 to host <host> left intact
* Closing connection 0
So I'm assuming that this error is specific to the Java client.
Based on suggestions provided in similar posts, I tried the following other approaches:
Adding a Connection: close header to the request
Sending Transfer-Encoding: gzip header in the request
Setting the retryOnConnectionFailure for the OkHttp client to true
But none of these approaches were able to resolve the issue.
So, my questions are:
What could be the underlying cause for this issue? Since I'm using chunked transfer encoding here, I suppose it's not due to an incorrect content-length header passed in the response.
What are the approaches I can try for debugging this further?
Would really appreciate any insights on this. Thank you.

libcurl write callback is not called for post http message

Intro
I'm sending a POST request to a server that responds with chunked messages, and I'm trying to get the write callback invoked for each received chunked HTTP message.
Code
#include <iostream>
#include <string>
#include <curl/curl.h>
using namespace std;

size_t write_callback(char *d, size_t n, size_t l, void *userp)
{
    cerr << "--- Called once" << endl;
    return n * l;
}

string xml_msg()
{
    return "<<some request data>>";
}

curl_slist* get_header(size_t content_length)
{
    auto list = curl_slist_append(nullptr, "<<protocol version>>");
    list = curl_slist_append(list, "Content-Type: text/xml");
    // curl_slist_append copies the string, so passing a temporary is safe
    list = curl_slist_append(list, ("Content-Length: " + to_string(content_length)).c_str());
    return list;
}

int main()
{
    auto xml = xml_msg();
    curl_global_init(CURL_GLOBAL_ALL);
    auto curl = curl_easy_init();
    auto headers = get_header(xml.size());
    curl_easy_setopt(curl, CURLOPT_URL, "<<server url>>");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_callback);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, nullptr);
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 6.0)");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "<<user credentials>>");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POST, 1L);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, xml.data());
    curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L);
    curl_easy_perform(curl);
    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
Verbose log
* STATE: INIT => CONNECT handle 0x15c4de0; line 1422 (connection #-5000)
* Added connection 0. The cache now contains 1 members
* STATE: CONNECT => WAITRESOLVE handle 0x15c4de0; line 1458 (connection #0)
* Trying xxx.xxx.xxx.xxx...
* TCP_NODELAY set
* STATE: WAITRESOLVE => WAITCONNECT handle 0x15c4de0; line 1539 (connection #0)
* Connected to <<host>> (xxx.xxx.xxx.xxx) port 80 (#0)
* STATE: WAITCONNECT => SENDPROTOCONNECT handle 0x15c4de0; line 1591 (connection #0)
* Marked for [keep alive]: HTTP default
* STATE: SENDPROTOCONNECT => PROTOCONNECT handle 0x15c4de0; line 1605 (connection #0)
* STATE: PROTOCONNECT => DO handle 0x15c4de0; line 1626 (connection #0)
* Server auth using Basic with user '<<credentials>>'
> POST <<URL>>
Host: <<host>>
Authorization: Basic <<base64 credentials>>
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0)
Accept: */*
Content-Type: text/xml
Content-Length: 204
* upload completely sent off: 204 out of 204 bytes
* STATE: DO => DO_DONE handle 0x15c4de0; line 1688 (connection #0)
* STATE: DO_DONE => WAITPERFORM handle 0x15c4de0; line 1813 (connection #0)
* STATE: WAITPERFORM => PERFORM handle 0x15c4de0; line 1823 (connection #0)
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
< Date: Tue, 08 May 2018 12:29:49 GMT
* Server is not blacklisted
< Server: <<server>>
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< Content-Language: en-US
< Cache-Control: no-cache, no-store
< Pragma: no-cache
< Content-Type: application/xml;charset=UTF-8
< Set-Cookie: <<cookie>>
< Transfer-Encoding: chunked
<
--- Called once
* STATE: PERFORM => DONE handle 0x15c4de0; line 1992 (connection #0)
* multi_done
* Connection #0 to <<server>> left intact
Problem
The write callback is only invoked when the connection is closed by the server with a FIN TCP packet due to a timeout, not at the moment the chunked HTTP response is received.
There is about a 30-second interval between these two events.
Question
What am I doing wrong?
Update 1
The server returns a TCP segment with the PUSH flag set and an HTTP message with chunked transfer encoding containing XML. The message ends with CRLF. Meanwhile, the Win API socket does not allow reading it: select() returns 0, meaning there is nothing to read or write on this socket.
After the 30-second delay before closing the connection due to a heartbeat timeout (an internal implementation detail of the server), the server sends a finalizing HTTP message with chunked transfer encoding, which contains 0 and CRLF. After that message, select() reports the new socket state and libcurl calls the write callback with the chunked message content.
That is what I see after debugging libcurl. I need to find a way to get the chunked HTTP message from libcurl as soon as it is received, not only after the final message arrives.
OK, I was able to find out that the problem is with Win API sockets. On Linux builds, libcurl calls the write callback right after receiving the chunked message. I'm not sure how to fix the issue on Windows builds, but at least I found the root cause of the problem.

Asterisk ARI / phpari - Bridge recording: "Recording not found"

I'm using phpari with Asterisk 13 and trying to record a bridge (mixing type).
In my code:
$this->phpariObject->bridges()->bridge_start_recording($bridgeID, "debug", "wav");
It returns:
array(4) {
["name"]=>
string(5) "debug"
["format"]=>
string(3) "wav"
["state"]=>
string(6) "queued"
["target_uri"]=>
string(15) "bridge:5:1:503"
}
When I stop and save with
$this->phpariObject->recordings()->recordings_live_stop_n_store("debug");
It returns FALSE.
I debug with
curl -v -u xxxx:xxxx -X POST "http://localhost:8088/ari/recordings/live/debug/stop"
Result:
* About to connect() to localhost port 8088 (#0)
* Trying ::1... Connection refused
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 8088 (#0)
* Server auth using Basic with user 'xxxxx'
> POST /ari/recordings/live/debug/stop HTTP/1.1
> Authorization: Basic xxxxxxx
> User-Agent: curl/7.19.7 (xxxxx) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:8088
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: Asterisk/13.2.0
< Date: Thu, 19 Feb 2015 11:58:18 GMT
< Cache-Control: no-cache, no-store
< Content-type: application/json
< Content-Length: 38
<
{
"message": "Recording not found"
* Connection #0 to host localhost left intact
* Closing connection #0
}
Asterisk CLI verbose 5 trace: http://pastebin.com/QZXnpXVA
So, I've solved the problem.
It was a simple write-permission issue: the Asterisk user couldn't write to /var/spool/asterisk/recording because it was owned by root.
Changing the ownership to the asterisk user solved it.
I detected this problem by looking at the Asterisk CLI trace again:
-- x=0, open writing: /var/spool/asterisk/recording/debug format: sln, (nil)
This (nil) indicates that the file could not be written, so I checked the folder and saw where the problem was.

Async responses with Aleph aren't being received over IPv4 but are with IPv6

I'm trying to get server-sent events set up in Clojure with Aleph, but it's just not working over IPv4. Everything is fine if I connect over IPv6. This occurs on both Linux and macOS. I've got a full example of what I'm talking about on GitHub.
I don't think I'm doing anything particularly fancy. The whole code is up on GitHub, but essentially my program is:
(def my-channel (permanent-channel))
(defroutes app-routes
(GET "/events" []
{:headers {"Content-Type" "text/event-stream"}
:body my-channel}))
(def app
(handler/site app-routes))
(start-server (wrap-ring-handler app) {:port 3000})
However, when I connect to 127.0.0.1:3000, I can see curl sending the request headers, but it just hangs, never printing the response headers:
$ curl -vvv http://127.0.0.1:3000/events
* About to connect() to 127.0.0.1 port 3000 (#0)
* Trying 127.0.0.1...
* Adding handle: conn: 0x7f920a004400
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f920a004400) send_pipe: 1, recv_pipe: 0
* Connected to 127.0.0.1 (127.0.0.1) port 3000 (#0)
> GET /events HTTP/1.1
> User-Agent: curl/7.30.0
> Host: 127.0.0.1:3000
> Accept: */*
If I connect over IPv6 the response comes right away, and events that I enqueue in the channel get sent correctly:
$ curl -vvv http://localhost:3000/events
* Adding handle: conn: 0x7f943c001a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f943c001a00) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 3000 (#0)
* Trying ::1...
* Connected to localhost (::1) port 3000 (#0)
> GET /events HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:3000
> Accept: */*
>
< HTTP/1.1 200 OK
* Server aleph/0.3.0 is not blacklisted
< Server: aleph/0.3.0
< Date: Tue, 15 Apr 2014 12:27:05 GMT
< Connection: keep-alive
< Content-Type: text/event-stream
< Transfer-Encoding: chunked
I have also reproduced this behaviour in Chrome. In both the IPv4 and IPv6 cases, tcpdump shows that the response headers are going over the wire.
This behaviour occurs both with lein run and an uberjar. It also occurs if I execute the uberjar with -Djava.net.preferIPv4Stack=true.
How do I get my application to behave the same over IPv4 as over IPv6?
